
Harden vLLM and patch utilities for edge cases#454

Open
danielhanchen wants to merge 1 commit into main from fix/compat-guards

Conversation

@danielhanchen
Contributor

Summary

  • relax vLLM standby checks and add HF fallback on generate failures (a sketch follows the Testing section)
  • fix FP8Linear construction when device arg is unsupported (sketched after this list)
  • add embed_tokens fallback for missing quant_state keys
  • fix compiler kwargs collisions for generated signatures
  • harden CSM depth decoder and model forward in edge cases
  • add synthetic data kit fallback for empty inputs
  • guard GPT OSS init weights and PEFT BnB dispatch
  • soften bitsandbytes quant_state access in 4bit linear
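For illustration, here is a minimal sketch of the FP8Linear device-arg guard described above. The helper name `construct_fp8_linear` is hypothetical and only shows the shape of the signature check, not the PR's actual code:

    import inspect

    def construct_fp8_linear(fp8_linear_cls, in_features, out_features, device=None):
        # Only forward `device` when this FP8Linear version's constructor accepts it.
        kwargs = {}
        if device is not None and "device" in inspect.signature(fp8_linear_cls.__init__).parameters:
            kwargs["device"] = device
        layer = fp8_linear_cls(in_features, out_features, **kwargs)
        # Older constructors without a `device` kwarg: move the layer afterwards.
        if device is not None and "device" not in kwargs:
            layer = layer.to(device)
        return layer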

Testing

  • python -m py_compile unsloth_zoo/vllm_utils.py unsloth_zoo/compiler.py unsloth_zoo/empty_model.py unsloth_zoo/temporary_patches/misc.py unsloth_zoo/temporary_patches/bitsandbytes.py
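A minimal sketch of the HF generate fallback mentioned in the summary, assuming a vLLM SamplingParams-like object with temperature, top_p, and max_tokens attributes; the helper name and the broad except are illustrative, not the PR's exact code:

    import torch

    def generate_with_hf_fallback(llm, hf_model, tokenizer, prompt, sampling_params):
        # Try vLLM first; on failure, convert sampling params to HF kwargs and retry.
        try:
            return llm.generate(prompt, sampling_params)
        except Exception:
            temperature = getattr(sampling_params, "temperature", 1.0)
            gen_kwargs = {
                "max_new_tokens": getattr(sampling_params, "max_tokens", 128),
                "top_p": getattr(sampling_params, "top_p", 1.0),
                "do_sample": temperature > 0,
            }
            if temperature > 0:
                gen_kwargs["temperature"] = temperature
            inputs = tokenizer(prompt, return_tensors="pt").to(hf_model.device)
            with torch.no_grad():
                output_ids = hf_model.generate(**inputs, **gen_kwargs)
            # Real code would normalize both branches to the same output type.
            return tokenizer.decode(output_ids[0], skip_special_tokens=True)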

@gemini-code-assist
Contributor

Summary of Changes

Hello @danielhanchen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the robustness and stability of the unsloth_zoo library by addressing numerous edge cases and potential failure points across various components. It introduces resilient error handling, intelligent fallbacks, and specific patches for model architectures and utility functions. The changes aim to prevent crashes, ensure smoother operation during model loading and inference, and improve overall reliability when dealing with diverse configurations and data conditions.

Highlights

  • vLLM Hardening: Relaxed vLLM standby checks by stripping 'expandable_segments:True' from allocation environment variables and added a Hugging Face fallback mechanism for vLLM generate failures, including conversion of sampling parameters.
  • FP8Linear Construction Fix: Addressed an issue in FP8Linear construction by conditionally passing the 'device' argument only when it is supported by the constructor's signature.
  • Embeddings Fallback: Implemented a fallback for 'embed_tokens' when the primary 'quant_state' key is missing, checking several alternative keys to ensure successful model loading.
  • Compiler Signature Collision Resolution: Fixed compiler kwargs collisions for dynamically generated function signatures, renaming conflicting 'kwargs' parameters to 'kwargs_' and aliasing them within the function body.
  • CSM Model Hardening: Hardened CSM depth decoder and model forward passes to handle edge cases, including cloning 'inputs_embeds' and gracefully managing empty training masks or None depth decoder losses.
  • Synthetic Data Kit Fallback: Introduced a fallback mechanism for the synthetic data kit's 'convert_format' function, providing synthetic QA pairs if the input file is empty, non-existent, or contains invalid data.
  • GPT OSS and PEFT BnB Dispatch Guards: Guarded GPT OSS weight initialization to prevent errors when modules lack expected weight attributes and ensured correct parameter passing (like 'quant_type' and 'compress_statistics') for PEFT BnB dispatch in Linear4bit modules.
  • Bitsandbytes Quant State Access Softening: Softened 'bitsandbytes' 'quant_state' access in 'Linear4bit' by wrapping the potential 'AssertionError' in a try-except block and falling back to 'torch.nn.functional.linear' when 'quant_state' is missing (see the sketch after this list).
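A minimal sketch of that softened quant_state access, assuming recent bitsandbytes versions expose fix_4bit_weight_quant_state_from_module and matmul_4bit; the standalone wrapper name is hypothetical, while the real patch edits Linear4bit's forward in place:

    import torch.nn.functional as F
    import bitsandbytes as bnb
    from bitsandbytes.nn.modules import fix_4bit_weight_quant_state_from_module

    def linear4bit_forward_softened(layer, x):
        # Swallow the AssertionError the fixer can raise when quant_state is absent.
        try:
            fix_4bit_weight_quant_state_from_module(layer)
        except AssertionError:
            pass
        quant_state = getattr(layer.weight, "quant_state", None)
        if quant_state is None:
            # No quantization metadata: fall back to a plain dense matmul.
            return F.linear(x, layer.weight, layer.bias)
        return bnb.matmul_4bit(x, layer.weight.t(), bias=layer.bias, quant_state=quant_state)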



@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f2149cc333


Comment on lines +423 to +426
def _write_fallback(output_path, format_type, storage_format):
    qa_pairs = _fallback_pairs()
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    if storage_format == "hf":


P2: Avoid os.makedirs on empty output dir in fallback

When output_path is just a filename in the current working directory (e.g., "out.jsonl"), os.path.dirname(output_path) returns an empty string. Calling os.makedirs("") raises FileNotFoundError, so the new empty-input fallback path crashes before writing any data. This means the intended fallback fails for the common case of writing to CWD. Guard the mkdir with a non-empty dirname or default to "." before creating the directory.
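A minimal guard along the lines suggested (the helper name is illustrative, not from the PR):

    import os

    def _ensure_parent_dir(output_path):
        # os.path.dirname("out.jsonl") is "", and os.makedirs("") raises
        # FileNotFoundError, so default to the current directory.
        os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)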



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a series of hardening measures and patches for edge cases, primarily focusing on vLLM and related utilities. The changes are well-implemented and significantly improve the robustness and user-friendliness of the library. Key improvements include relaxing vLLM standby checks, adding a fallback to Hugging Face's generate method, fixing FP8Linear construction, and providing fallbacks for missing quant_state keys. The code is more resilient to various edge cases, preventing potential crashes. I have one suggestion to refactor a function for better maintainability.

Comment on lines +423 to +468
def _write_fallback(output_path, format_type, storage_format):
    qa_pairs = _fallback_pairs()
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    if storage_format == "hf":
        if format_type == "jsonl":
            formatted_pairs = qa_pairs
        elif format_type == "alpaca":
            formatted_pairs = [
                {"instruction": p["question"], "input": "", "output": p["answer"]}
                for p in qa_pairs
            ]
        elif format_type == "ft":
            formatted_pairs = [
                {
                    "messages": [
                        {"role": "system", "content": "You are a helpful assistant."},
                        {"role": "user", "content": p["question"]},
                        {"role": "assistant", "content": p["answer"]},
                    ]
                }
                for p in qa_pairs
            ]
        elif format_type == "chatml":
            formatted_pairs = [
                {
                    "messages": [
                        {"role": "system", "content": "You are a helpful AI assistant."},
                        {"role": "user", "content": p["question"]},
                        {"role": "assistant", "content": p["answer"]},
                    ]
                }
                for p in qa_pairs
            ]
        else:
            raise ValueError(f"Unknown format type: {format_type}")
        return to_hf_dataset(formatted_pairs, output_path)

    if format_type == "jsonl":
        return to_jsonl(qa_pairs, output_path)
    if format_type == "alpaca":
        return to_alpaca(qa_pairs, output_path)
    if format_type == "ft":
        return to_fine_tuning(qa_pairs, output_path)
    if format_type == "chatml":
        return to_chatml(qa_pairs, output_path)
    raise ValueError(f"Unknown format type: {format_type}")

Severity: medium

The _write_fallback function uses a long if/elif chain to handle the different format types. It can be refactored to use dictionaries that map format types to handler functions, which would make the code more readable, maintainable, and easier to extend with new formats.

    def _write_fallback(output_path, format_type, storage_format):
        qa_pairs = _fallback_pairs()
        os.makedirs(os.path.dirname(output_path), exist_ok=True)

        if storage_format == "hf":
            formatters = {
                "jsonl": lambda pairs: pairs,
                "alpaca": lambda pairs: [
                    {"instruction": p["question"], "input": "", "output": p["answer"]} for p in pairs
                ],
                "ft": lambda pairs: [
                    {
                        "messages": [
                            {"role": "system", "content": "You are a helpful assistant."},
                            {"role": "user", "content": p["question"]},
                            {"role": "assistant", "content": p["answer"]},
                        ]
                    }
                    for p in pairs
                ],
                "chatml": lambda pairs: [
                    {
                        "messages": [
                            {"role": "system", "content": "You are a helpful AI assistant."},
                            {"role": "user", "content": p["question"]},
                            {"role": "assistant", "content": p["answer"]},
                        ]
                    }
                    for p in pairs
                ],
            }
            formatter = formatters.get(format_type)
            if formatter is None:
                raise ValueError(f"Unknown format type for hf storage: {format_type}")
            return to_hf_dataset(formatter(qa_pairs), output_path)

        savers = {
            "jsonl": to_jsonl,
            "alpaca": to_alpaca,
            "ft": to_fine_tuning,
            "chatml": to_chatml,
        }
        saver = savers.get(format_type)
        if saver is None:
            raise ValueError(f"Unknown format type: {format_type}")
        return saver(qa_pairs, output_path)
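One note on the dispatch-table shape above: the "ft" and "chatml" formatters differ only in their system-prompt string, so a single chat formatter parameterized by that string could collapse the two entries further.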

The remaining diff hunks are truncated in this view. They cover: the compiler fix, which regex-renames a positional kwargs parameter that collides with **kwargs in a generated signature to kwargs_ and inserts kwargs = kwargs_ at the top of the function body; the CSM depth decoder, model forward, and merge patches, each registered via TEMPORARY_PATCHES.append; the synthetic data kit convert_format replacement applied with patch_function(save_as, "convert_format", convert_format); the GptOssPreTrainedModel._init_weights guard; and the PEFT BnB dispatch guard, which wraps fix_4bit_weight_quant_state_from_module(self) in a try/except AssertionError.
