
fix api breaking changes #2832

Open
aleozlx wants to merge 2 commits into flashinfer-ai:main from aleozlx:fix_0.6.7

Conversation


@aleozlx aleozlx commented Mar 20, 2026

📌 Description

Fix API breaking changes for the 0.6.7 release.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Enhancements
    • Normalization routines now accept scale as either a float or tensor; passing a float emits a deprecation warning and is auto-converted for compatibility.
    • Attention/decoding API: cache-scale parameters are now optional keyword arguments with sensible defaults, simplifying common call patterns.

@aleozlx aleozlx added the v0.6.7 (release blocker label for 0.6.7) label Mar 20, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses API breaking changes for the upcoming 0.6.7 release by refining function signatures across several modules. It primarily focuses on making certain cache-related parameters keyword-only to improve API stability and clarity. Additionally, it enhances the flexibility of quantization scale handling in normalization functions by temporarily allowing float inputs while guiding users towards a more robust torch.Tensor approach with a deprecation warning.

Highlights

  • API Signature Changes: The xqa_batch_decode_with_kv_cache function in flashinfer/decode.py and the xqa function in flashinfer/xqa.py had their k_cache_sf and v_cache_sf (or k_sf_cache and v_sf_cache) parameters moved to keyword-only arguments to prevent breaking changes.
  • Quantization Scale Handling: The _normalize_scale_tensor, rmsnorm_quant, and fused_add_rmsnorm_quant functions in flashinfer/norm/__init__.py were updated to accept scale as either a float or torch.Tensor, with a deprecation warning issued when a float is provided to encourage future use of torch.Tensor.
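
The keyword-only change can be sketched as follows. This is a minimal illustration, not the real `xqa` signature: only `k_sf_cache` and `v_sf_cache` are names from this PR, and the surrounding parameters are elided stand-ins.

```python
# Minimal sketch of the keyword-only change described above. `xqa_sketch`
# stands in for flashinfer's `xqa`; only the scale-cache parameter names
# come from this PR, everything else is a placeholder.
def xqa_sketch(q, k_cache, v_cache, *, k_sf_cache=None, v_sf_cache=None):
    # Parameters after `*` can only be passed by keyword.
    return (k_sf_cache, v_sf_cache)

# New call style: scale caches passed by keyword, or omitted entirely.
print(xqa_sketch("q", "k", "v", k_sf_cache="sf_k"))  # -> ('sf_k', None)

# Passing them positionally now raises TypeError instead of silently
# landing in the wrong slot.
try:
    xqa_sketch("q", "k", "v", "sf_k", "sf_v")
except TypeError as exc:
    print("positional call rejected:", exc)
```

This is why the change restores compatibility for callers that never passed the scale caches, while forcing callers that did to name them explicitly.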


@coderabbitai
Contributor

coderabbitai bot commented Mar 20, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1a4f2e98-97a7-45b4-a37e-598fe7aeca71

📥 Commits

Reviewing files that changed from the base of the PR and between e35c19e and 1c64dee.

📒 Files selected for processing (1)
  • flashinfer/norm/__init__.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • flashinfer/norm/__init__.py

📝 Walkthrough


The PR changes xqa to make KV-cache scale tensors keyword-only with default None and updates callers to pass k_sf_cache/v_sf_cache as keywords. It also relaxes normalization scale parameters to accept float or torch.Tensor, emitting a deprecation warning and converting floats to tensors.

Changes

Cohort / File(s) — Summary

  • XQA API & Callers (flashinfer/xqa.py, flashinfer/decode.py):
    xqa signature updated: k_sf_cache and v_sf_cache removed from positional args and added as keyword-only optional parameters (*, k_sf_cache=None, v_sf_cache=None). Caller xqa_batch_decode_with_kv_cache updated to pass k_sf_cache= and v_sf_cache=.
  • Normalization API Flexibility (flashinfer/norm/__init__.py):
    _normalize_scale_tensor now accepts scale: Union[float, torch.Tensor], emits a FutureWarning when given a float, converts it to a tensor, and type annotations for rmsnorm_quant and fused_add_rmsnorm_quant are updated accordingly.
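
The float-to-tensor shim follows a common deprecation pattern, sketched below without the torch dependency (a plain list stands in for `torch.Tensor`; in the real helper the conversion is `torch.tensor([scale], dtype=torch.float32, device=...)`):

```python
import warnings

def normalize_scale_sketch(scale):
    """Sketch of the deprecation shim: warn on float input, then convert.

    A list stands in for torch.Tensor so this runs without torch; the real
    helper is _normalize_scale_tensor in flashinfer/norm/__init__.py.
    """
    if not isinstance(scale, list):  # real code: isinstance(scale, torch.Tensor)
        warnings.warn(
            "Passing scale as a float is deprecated; use a tensor of shape (1,).",
            FutureWarning,
            stacklevel=2,
        )
        scale = [float(scale)]  # real code: torch.tensor([scale], dtype=torch.float32)
    return scale

# Float path warns and converts; tensor-like path passes through silently.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = normalize_scale_sketch(0.5)
print(out, len(caught))  # -> [0.5] 1
```

The `FutureWarning` keeps old float-passing call sites working for now while signaling that they should migrate to tensor scales.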

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

run-ci

Suggested reviewers

  • yongwww
  • yzh119

Poem

🐇 I hopped through code with nimble paws,
Keywords found their rightful cause,
Floats donned tensors, warned with grace,
KV-scales now sit in their place. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check — ❓ Inconclusive
    Explanation: The title 'fix api breaking changes' is somewhat vague and generic. While it indicates the general nature of the changes (API fixes), it lacks specificity about which APIs were changed or what the breaking changes entailed.
    Resolution: Consider a more specific title such as 'Make k_sf_cache and v_sf_cache keyword-only in xqa API' or 'Update xqa and norm APIs for 0.6.7 release' to better clarify the specific changes.
✅ Passed checks (2 passed)
  • Description check — ✅ Passed
    The PR description includes the template structure and mentions 'fix api breaking changes for 0.6.7 release', but the Description section is minimal and lacks detail about what breaking changes were fixed. Checklist items are all unchecked and reviewer notes are empty.
  • Docstring Coverage — ✅ Passed
    Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces several API changes to improve backward compatibility and future-proofing. The changes in flashinfer/decode.py and flashinfer/xqa.py correctly adapt to making k_sf_cache and v_sf_cache keyword-only arguments, which is a good API design practice. The modifications in flashinfer/norm/__init__.py add backward compatibility for the scale parameter by allowing a float value, while issuing a helpful FutureWarning. The implementation is sound. I have one minor suggestion regarding code style in flashinfer/norm/__init__.py to improve adherence to PEP 8.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
flashinfer/norm/__init__.py (1)

186-187: ⚠️ Potential issue | 🟡 Minor

Update scale parameter docs to match the new API contract.

The docstrings still say scale: torch.Tensor, but the function now accepts float (deprecated) as well. This mismatch will confuse users.

Proposed doc update
-    scale: torch.Tensor
-        Scale factor for quantization, shape (1,).
+    scale: Union[float, torch.Tensor]
+        Quantization scale. `torch.Tensor` of shape (1,) is preferred.
+        Passing `float` is deprecated and kept temporarily for compatibility.

Also applies to: 301-302

🧹 Nitpick comments (1)
flashinfer/norm/__init__.py (1)

65-77: Tighten non-tensor input validation for scale.

Current logic warns for any non-tensor value, but only float input is intended here. Add an explicit type gate so invalid inputs fail with a clear TypeError instead of implicit tensor-construction errors.

Proposed patch
 def _normalize_scale_tensor(
     scale: Union[float, torch.Tensor], ref_tensor: torch.Tensor
 ) -> torch.Tensor:
     """Normalize quantization scale to 1D tensor of shape (1,) on target device."""
-    if not isinstance(scale, torch.Tensor):
+    if not isinstance(scale, torch.Tensor):
+        if not isinstance(scale, float):
+            raise TypeError(
+                f"scale must be float or torch.Tensor, got {type(scale).__name__}"
+            )
         import warnings
 
         warnings.warn(
             "Passing scale as a float is deprecated and will be removed in a future "
             "release. Use a torch.Tensor of shape (1,) instead.",
             FutureWarning,
             stacklevel=3,
         )
         scale = torch.tensor([scale], dtype=torch.float32, device=ref_tensor.device)

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 87962e9b-6d94-4598-9a79-016ec25b4bbd

📥 Commits

Reviewing files that changed from the base of the PR and between 6f0928c and e35c19e.

📒 Files selected for processing (3)
  • flashinfer/decode.py
  • flashinfer/norm/__init__.py
  • flashinfer/xqa.py


aleozlx commented Mar 20, 2026

/bot run

@flashinfer-bot

GitLab MR !440 has been created, and the CI pipeline #46621950 is currently running. I'll report back once the pipeline job completes.


@jimmyzho jimmyzho left a comment


lgtm for decode, just left question for clarity

rcp_out_scale: float = 1.0,
q_seq_len: int = 1,
mask: Optional[torch.Tensor] = None,
*,
Contributor

Why do these parameters need to be keyword-only?

Collaborator Author

i think it's an optional feature (guessing)

to that end the rationale is documented (end of this page)

https://github.com/flashinfer-ai/flashinfer/blob/main/CONTRIBUTING.md

Collaborator Author

i'd like to put a * as soon as the basic features are done in the api. the extra things that pile on later, passed positionally, just get worse and worse for api stability

imo positional args shouldn't exceed 10, or it becomes harder to maintain
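
The stability argument above can be seen in a tiny example (hypothetical functions, not FlashInfer code): appending new keyword-only parameters never changes the meaning of existing call sites, whereas inserting a positional parameter would.

```python
# v1 of a hypothetical API: everything after `*` is keyword-only.
def decode_v1(q, kv, *, window=None):
    return ("v1", window)

# v2 appends a new keyword-only parameter. Every decode_v1 call site still
# works unchanged, and no positional call can silently shift arguments
# into the wrong slot.
def decode_v2(q, kv, *, window=None, sink=None):
    return ("v2", window, sink)

print(decode_v1("q", "kv", window=8))  # -> ('v1', 8)
print(decode_v2("q", "kv", window=8))  # -> ('v2', 8, None)
```

Had `window` been positional, adding `sink` before it (or reordering) would silently break old callers; the `*` marker rules that class of breakage out.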

Collaborator Author

great questions! keep them coming


@bkryu bkryu left a comment


Approving norm changes. Thanks @aleozlx

@flashinfer-bot

[FAILED] Pipeline #46621950: 6/20 passed

@aleozlx
Copy link
Collaborator Author

aleozlx commented Mar 20, 2026

ugh internal CI has caught errors on xqa.. i'll fix them later today


Labels

v0.6.7 (release blocker label for 0.6.7)
