
Conversation

@vincentzed (Contributor) commented Jan 19, 2026

📌 Description

Later, we will also add benchmarks for flashinfer.topk, since the only tests for it in the codebase are in tests/utils/test_topk.py and its performance is not tracked anywhere.

Motivation: sgl-project/sglang#17243 and other analysis of whether sampling can be improved (it still accounts for a relatively trivial share of runtime).

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or my preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@coderabbitai bot (Contributor) commented Jan 19, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.
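For example, a minimal .coderabbit.yaml snippet for that setting might look like this (a sketch inferred from the option name above):

reviews:
  review_status: false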

  • 🔍 Trigger a full review
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Tip

🧪 Unit Test Generation v2 is now available!

We have significantly improved our unit test generation capabilities.

To enable: Add this to your .coderabbit.yaml configuration:

reviews:
  finishing_touches:
    unit_tests:
      enabled: true

Try it out by using the @coderabbitai generate unit tests command on your code files or under ✨ Finishing Touches on the walkthrough!

Have feedback? Share your thoughts on our Discord thread!


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @vincentzed, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the FlashInfer benchmarking framework by adding dedicated routines for evaluating the performance of various sampling strategies. This includes fundamental sampling from probability distributions, advanced techniques like Top-P (nucleus) and Top-K sampling, their combined application, and utility functions for probability renormalization and logit masking. The integration ensures that users can now comprehensively assess the efficiency of these critical components in large language model inference workflows, providing valuable insights for optimization and development.
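For reference, the sketch below shows roughly how the routines covered by the new benchmarks are driven from Python. The routine names come from this PR; the flashinfer.sampling module path, the exact signatures, and the scalar-threshold usage are assumptions that may vary across FlashInfer versions.

import torch
import flashinfer

# Toy batch of probability distributions over a vocabulary.
batch_size, vocab_size = 16, 32000
logits = torch.randn(batch_size, vocab_size, device="cuda")
probs = torch.softmax(logits, dim=-1)
top_p, top_k = 0.9, 50  # scalar thresholds; per-request tensors may also be accepted, depending on version

# Plain categorical sampling from the probability distribution.
samples = flashinfer.sampling.sampling_from_probs(probs)

# Top-P (nucleus), Top-K, and combined Top-K/Top-P sampling.
samples_p = flashinfer.sampling.top_p_sampling_from_probs(probs, top_p)
samples_k = flashinfer.sampling.top_k_sampling_from_probs(probs, top_k)
samples_kp = flashinfer.sampling.top_k_top_p_sampling_from_probs(probs, top_k, top_p)

# Utility kernels: renormalize probabilities after top-k / top-p filtering,
# or mask logits outside the top-k before softmax.
renorm_k = flashinfer.sampling.top_k_renorm_probs(probs, top_k)
renorm_p = flashinfer.sampling.top_p_renorm_probs(probs, top_p)
masked = flashinfer.sampling.top_k_mask_logits(logits, top_k)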

Highlights

  • New Sampling Benchmarks: Introduced a comprehensive suite of benchmarks for various sampling routines, including basic, Top-P, Top-K, combined Top-K/Top-P, probability renormalization, and logit masking.
  • Benchmarking Framework Integration: The existing flashinfer_benchmark.py and flashinfer_benchmark_utils.py files were updated to seamlessly integrate these new sampling routines, allowing for consistent performance evaluation.
  • Detailed Documentation: The benchmarks/README.md was updated to provide clear descriptions of the new sampling APIs, their purpose, and the command-line flags available for configuring sampling benchmarks.
  • Sample Test Cases: Added a variety of sample test configurations in sample_testlist.txt to demonstrate the usage and capabilities of the new sampling benchmarks across different parameters.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Supported commands:

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces comprehensive benchmark tests for various sampling routines in FlashInfer, which is a great addition for performance tracking. The changes include a new sampling.py routine file, updates to the main benchmark script and utilities to integrate the new tests, and documentation updates in the README.md.

The code is well-structured, but I have a few suggestions to improve maintainability and correctness:

  • Refactor duplicated code in flashinfer_benchmark_utils.py for defining supported compute capabilities.
  • Adhere to PEP 8 naming conventions for functions in the new sampling.py file.
  • Add a reference check to the testTopPRenormProbs benchmark for correctness validation.

Details are in the specific comments. Overall, this is a solid contribution.

Comment on lines +450 to 521
    # SAMPLING - supported on all architectures
    "sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_p_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_top_p_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_renorm_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_p_renorm_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_mask_logits": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
}
Severity: medium

There's a lot of code duplication here: all sampling routines share the same compute-capability support matrix. To improve maintainability, you can define the support dictionary once and reuse it for every sampling routine; a dictionary comprehension keeps this concise.

    # SAMPLING - supported on all architectures
    **{
        routine: {
            "7.5": ["cuda"],
            "8.0": ["cuda"],
            "8.6": ["cuda"],
            "8.9": ["cuda"],
            "9.0": ["cuda"],
            "10.0": ["cuda"],
            "10.3": ["cuda"],
            "12.0": ["cuda"],
        }
        for routine in benchmark_apis["sampling"]
    },
}

Comment on lines +44 to +58
    if args.routine == "sampling_from_probs":
        return testSamplingFromProbs(args)
    if args.routine == "top_p_sampling_from_probs":
        return testTopPSamplingFromProbs(args)
    if args.routine == "top_k_sampling_from_probs":
        return testTopKSamplingFromProbs(args)
    if args.routine == "top_k_top_p_sampling_from_probs":
        return testTopKTopPSamplingFromProbs(args)
    if args.routine == "top_k_renorm_probs":
        return testTopKRenormProbs(args)
    if args.routine == "top_p_renorm_probs":
        return testTopPRenormProbs(args)
    if args.routine == "top_k_mask_logits":
        return testTopKMaskLogits(args)
    raise ValueError(f"Unsupported routine: {args.routine}")
Severity: medium

The function names in this file (testSamplingFromProbs, testTopPSamplingFromProbs, etc.) do not follow the PEP 8 style guide, which recommends snake_case for function names. For consistency with the rest of the Python ecosystem and to improve readability, please rename these functions and their definitions. For example, testSamplingFromProbs should be test_sampling_from_probs.

Suggested change
-    if args.routine == "sampling_from_probs":
-        return testSamplingFromProbs(args)
-    if args.routine == "top_p_sampling_from_probs":
-        return testTopPSamplingFromProbs(args)
-    if args.routine == "top_k_sampling_from_probs":
-        return testTopKSamplingFromProbs(args)
-    if args.routine == "top_k_top_p_sampling_from_probs":
-        return testTopKTopPSamplingFromProbs(args)
-    if args.routine == "top_k_renorm_probs":
-        return testTopKRenormProbs(args)
-    if args.routine == "top_p_renorm_probs":
-        return testTopPRenormProbs(args)
-    if args.routine == "top_k_mask_logits":
-        return testTopKMaskLogits(args)
-    raise ValueError(f"Unsupported routine: {args.routine}")
+    if args.routine == "sampling_from_probs":
+        return test_sampling_from_probs(args)
+    if args.routine == "top_p_sampling_from_probs":
+        return test_top_p_sampling_from_probs(args)
+    if args.routine == "top_k_sampling_from_probs":
+        return test_top_k_sampling_from_probs(args)
+    if args.routine == "top_k_top_p_sampling_from_probs":
+        return test_top_k_top_p_sampling_from_probs(args)
+    if args.routine == "top_k_renorm_probs":
+        return test_top_k_renorm_probs(args)
+    if args.routine == "top_p_renorm_probs":
+        return test_top_p_renorm_probs(args)
+    if args.routine == "top_k_mask_logits":
+        return test_top_k_mask_logits(args)
+    raise ValueError(f"Unsupported routine: {args.routine}")

Comment on lines 734 to 748
def testTopPRenormProbs(args):
    """Test top_p_renorm_probs API.

    This test:
    1. Generates random probability distributions
    2. Runs top_p_renorm_probs (renormalize by top-p thresholding)
    3. Measures performance metrics

    Args:
        args: Parsed command line arguments containing test configuration

    Returns:
        dict: List of dictionaries containing performance results

    """
Severity: medium

The testTopPRenormProbs function is missing a reference check (refcheck) to validate the correctness of the implementation. Other similar test functions in this file, like testTopKRenormProbs, include this check. Adding a reference implementation using PyTorch and comparing the results would increase confidence in the benchmark's correctness. You can find an example of a PyTorch reference implementation for top-p in tests/utils/test_sampling.py.
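For illustration, a minimal PyTorch reference for top-p renormalization could look like the sketch below. The helper name and shapes are hypothetical, and nucleus boundary handling may differ slightly from the FlashInfer kernel, so the comparison uses a tolerance.

import torch
import flashinfer


def top_p_renorm_probs_ref(probs: torch.Tensor, top_p: float) -> torch.Tensor:
    """Pure-PyTorch reference: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches top_p, zero the rest, renormalize."""
    sorted_probs, sorted_idx = torch.sort(probs, dim=-1, descending=True)
    cumsum = torch.cumsum(sorted_probs, dim=-1)
    # A token stays in the nucleus if the mass accumulated *before* it is
    # still below top_p; this keeps at least the most likely token per row.
    keep_sorted = (cumsum - sorted_probs) < top_p
    keep = torch.zeros_like(probs).scatter(-1, sorted_idx, keep_sorted.to(probs.dtype))
    kept = probs * keep
    return kept / kept.sum(dim=-1, keepdim=True)


probs = torch.softmax(torch.randn(8, 32000, device="cuda"), dim=-1)
ref = top_p_renorm_probs_ref(probs, top_p=0.9)
out = flashinfer.sampling.top_p_renorm_probs(probs, 0.9)
torch.testing.assert_close(out, ref, rtol=1e-3, atol=1e-3)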

@yzh119 (Collaborator) commented Jan 20, 2026

Hi @vincentzed, would you mind checking the following files:

and seeing whether there are some components we can reuse?

Signed-off-by: vincentzed <[email protected]>

style check

Signed-off-by: vincentzed <[email protected]>

minor style change

Signed-off-by: vincentzed <[email protected]>

more

Signed-off-by: vincentzed <[email protected]>
@vincentzed force-pushed the vz/fix-benchmark-sampling branch from df66f58 to fddeca5 on January 30, 2026 at 13:37
Signed-off-by: vincentzed <[email protected]>
