
[Feat][GEMV] Implement a Gemv forward kernel #175

Open
RMLYC wants to merge 6 commits into tile-ai:main from RMLYC:lyc/add_gemv

Conversation

RMLYC (Collaborator) commented on Feb 11, 2026

Description

This PR adds an implementation of the GEMV (general matrix-vector multiplication) operator.
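
For context, GEMV computes y = A·x for an (M, N) matrix A and a length-N vector x. A minimal PyTorch reference sketch (the function name here is illustrative, not the PR's API):

```python
import torch

def gemv_ref(a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Reference GEMV: y = A @ x for A of shape (M, N) and x of shape (N,)."""
    assert a.ndim == 2 and x.ndim == 1 and a.shape[1] == x.shape[0]
    return a @ x
```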

Type of Change

  • [ ] Bug fix
  • [x] New operator implementation
  • [ ] Performance improvement
  • [ ] Documentation update
  • [ ] Infrastructure/CI

Checklist

  • [x] I have run pre-commit run --all-files and fixed all linting issues.
  • [x] I have verified that my changes pass local unit tests.

Kernel and Ops

  • [x] Implement TileLang kernel in top/kernels/gemv/
  • [x] Define torch.ops interface in top/ops/gemv.py (a sketch of this kind of wrapper follows below)
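
As a rough illustration of the second item, here is how a torch.library.custom_op wrapper for a GEMV kernel typically looks. This is a hedged sketch, not the PR's code: the op name `top::gemv` is assumed, and `torch.mv` stands in for the actual TileLang kernel launch.

```python
import torch

# Sketch only: "top::gemv" is an assumed namespace; the PR's real wrapper is
# _gemv_wrapped_kernel in top/kernels/gemv/gemv.py.
@torch.library.custom_op("top::gemv", mutates_args=())
def gemv(a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # The real op launches the compiled TileLang kernel; torch.mv stands in here.
    return torch.mv(a, x)

@gemv.register_fake
def _(a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Shape/dtype propagation for meta tensors and torch.compile tracing.
    return a.new_empty(a.shape[0])
```

Registering through torch.library makes the kernel callable as torch.ops.top.gemv(a, x) and visible to torch.compile.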

Benchmarks and Unit Tests

  • [x] Implement benchmarks/gemv.py
  • [x] Implement tests/ops/test_gemv.py (compare against a PyTorch reference; a test sketch follows below)
    • FP16 (tolerance: 1e-3)
    • BF16 (tolerance: 1.6e-2)
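
A standalone sketch of what such a tolerance test can look like (shapes and the op invocation are assumed; the PR's real tests live in tests/ops/test_gemv.py and exercise GemvOp):

```python
import pytest
import torch

@pytest.mark.parametrize("M,N", [(1024, 1024), (4096, 8192)])
@pytest.mark.parametrize("dtype,tol", [(torch.float16, 1e-3), (torch.bfloat16, 1.6e-2)])
def test_gemv_matches_reference(M, N, dtype, tol):
    a = torch.randn(M, N, dtype=dtype, device="cuda")
    x = torch.randn(N, dtype=dtype, device="cuda")
    y = torch.ops.top.gemv(a, x)             # op under test (name assumed above)
    ref = (a.float() @ x.float()).to(dtype)  # fp32 reference, cast back
    torch.testing.assert_close(y, ref, rtol=tol, atol=tol)
```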

RMLYC requested a review from a team on February 11, 2026 at 04:01
gemini-code-assist (Contributor) commented

Summary of Changes

Hello @RMLYC, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new General Matrix-Vector (GEMV) operator, enhancing the library's capabilities for fundamental linear algebra operations. It includes the low-level kernel implementation optimized with tilelang, a high-level operator interface, and corresponding benchmarks and tests to ensure its correctness and performance. This addition aims to provide an efficient and robust GEMV operation for use in deep learning models.

Highlights

  • New Kernel Implementation: A new GemvKernel has been implemented using tilelang to provide an optimized forward kernel for General Matrix-Vector (GEMV) multiplication.
  • New Operator: A GemvOp operator was introduced, encapsulating the GemvKernel and integrating it into the existing operator framework.
  • Benchmarking: A GemvBenchmark class was added to measure the performance and verify the correctness of the GemvOp against a reference implementation.
  • Unit Testing: Comprehensive unit tests for the GemvOp were included to ensure its functionality and accuracy across various input configurations.


Changelog
  • benchmarks/__init__.py
    • Imported GemvBenchmark.
    • Added GemvBenchmark to the module's __all__ export list.
  • benchmarks/gemv/__init__.py
    • Created __init__.py to expose GemvBenchmark.
  • benchmarks/gemv/gemv.py
    • Implemented the GemvBenchmark class, including methods for calculating total FLOPs and memory traffic, generating inputs, and defining a reference program (see the FLOPs/memory sketch after this changelog).
  • tests/ops/test_gemv.py
    • Added parameterized pytest tests for the GemvOp covering different dimensions, data types, and tuning options.
  • top/kernels/gemv/__init__.py
    • Created __init__.py to expose GemvKernel.
  • top/kernels/gemv/gemv.py
    • Implemented the _gemv_kernel function using tilelang.jit for GPU kernel generation.
    • Defined _gemv_wrapped_kernel as a torch.library.custom_op for PyTorch integration.
    • Implemented the GemvKernel class, inheriting from Kernel, with default and autotune configurations for different architectures.
  • top/ops/__init__.py
    • Imported GemvOp.
    • Added GemvOp to the module's __all__ export list.
  • top/ops/gemv.py
    • Implemented the GemvOp class, inheriting from Op, which dispatches to the GemvKernel.
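
For reviewers checking the benchmark numbers: a GEMV over an (M, N) matrix performs 2·M·N FLOPs (one multiply and one add per matrix element) and moves roughly M·N + N + M elements. A sketch of how such helpers might compute this (function names assumed, not the PR's exact API):

```python
def gemv_total_flops(M: int, N: int) -> int:
    # One multiply plus one add per element of A counts as 2 FLOPs.
    return 2 * M * N

def gemv_total_bytes(M: int, N: int, elem_size: int = 2) -> int:
    # Read A (M*N) and x (N), write y (M); elem_size=2 for fp16/bf16.
    return (M * N + N + M) * elem_size
```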

gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new GemvOp operator, including its tilelang-based kernel implementation, benchmarks, and tests. While the implementation is generally well-structured, a critical security concern has been identified: the GPU kernel performs no bounds checking, and the operator's forward method does not validate input shapes. These omissions can lead to out-of-bounds memory access (both reads and writes) when dimensions are not multiples of the block sizes or when tensor shapes are mismatched. It is strongly recommended to implement explicit bounds checks within the kernel and to add input validation in the operator wrapper.

There are also a few areas for general code improvement: a potentially problematic shared class attribute kernel_map in top/ops/gemv.py, a redundant write operation in top/kernels/gemv/gemv.py, and an inconsistency between supported_archs and default_config in the kernel file. Specific details and suggestions are provided in the inline comments.
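
For concreteness, here is a sketch of the kind of input validation and per-instance caching the review asks for (class and attribute names are taken from the review text; the body is illustrative, not the PR's code):

```python
import torch

class GemvOp:
    def __init__(self):
        # Per-instance cache avoids the shared class-attribute pitfall noted above.
        self.kernel_map = {}

    def forward(self, a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Validate shapes and placement before handing tensors to the GPU kernel.
        if a.ndim != 2 or x.ndim != 1:
            raise ValueError(f"expected a 2-D matrix and a 1-D vector, got {a.ndim}-D and {x.ndim}-D")
        if a.shape[1] != x.shape[0]:
            raise ValueError(f"shape mismatch: A is {tuple(a.shape)}, x has {x.shape[0]} elements")
        if a.dtype != x.dtype or a.device != x.device:
            raise ValueError("A and x must have the same dtype and device")
        return torch.ops.top.gemv(a, x)  # dispatch to the (assumed) wrapped kernel
```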

stelladuyx (Collaborator) left a comment

LGTM
