[Ecosystem] Helion #58
Description
Contact emails
clementab@meta.com, oulgen@meta.com, jansel@meta.com, willfeng@meta.com, jongsokchoi@meta.com, mhoehnerbach@meta.com
Project summary
Helion is a Python-embedded domain-specific language (DSL) for authoring machine learning kernels, designed to compile down to Triton, a high-performance language for programming GPUs and other devices.
Project description
What Helion does: Helion is a PyTorch-native DSL for authoring high-performance machine learning kernels. It makes writing fast, scalable ML kernels dramatically easier and addresses the dichotomy between high-level productivity and low-level control. Helion establishes a new layer of abstraction that bridges the user-friendly simplicity of PyTorch with the performance of a lower-level language. Its core innovations are:
PyTorch-Native Syntax: Write kernels using familiar PyTorch idioms
Advanced Autotuning: Explores a vast space of kernel configurations automatically
Hardware Portability: Single kernel definition works across NVIDIA, AMD, Intel GPUs, and other hardware accelerators
Why it should be part of the PyTorch Foundation: Helion delivers clear technical and community value aligned with PyTorch’s mission:
It makes writing high-performance kernels accessible with minimal boilerplate, enabling researchers and practitioners to translate ideas into efficient GPU code quickly. It integrates naturally with PyTorch, supports heterogeneous hardware, and offers practical developer ergonomics. These capabilities directly strengthen PyTorch’s ecosystem for performance-centric research and production workloads.
Its permissive BSD-3 license and Python-first design encourage broad adoption, downstream innovation, and contributions from academia and industry alike.
The project’s momentum demonstrates healthy community engagement and sustained maintenance, which are the hallmarks of a vibrant, growing open-source project.
Keeping Helion within the PyTorch Foundation ensures:
Stable, neutral governance that sustains quality, security, and long-term maintenance.
A consistent contributor experience—shared tooling, documentation norms, CI expectations, and community standards—lowering the barrier to entry for new contributors and maintainers.
PyTorch Foundation stewardship is a clear, public commitment to open source and community-first development. Positioning Helion within the Foundation underscores that we are building for the community. The Foundation’s neutral governance, reputation, and programmatic support will help us reach a larger and more diverse audience—spanning academia, startups, and enterprises—and explicitly demonstrate that we welcome and rely on many external contributors and maintainers. This alignment makes Helion a more discoverable, trusted, and inclusive home for current and future contributors.
Are there any other projects in the PyTorch Ecosystem similar to yours? If yes, what are they?
No
Project repo URL
https://github.com/pytorch/helion
Additional repos in scope of the application
No response
Project license
BSD-3
GitHub handles of the project maintainer(s)
jansel, oulgen, yf225, choijon5, v0i0
Is there a corporate or academic entity backing this project? If so, please provide the name and URL of the entity.
No response
Website URL
Documentation
How do you build and test the project today (continuous integration)? Please describe.
https://github.com/pytorch/helion/actions
Version of PyTorch
PyTorch 2.9+
Components of PyTorch
TorchInductor, FX Graph
How long do you expect to maintain the project?
This is a long-term project with several maintainers.
Core maintainers include:
Jason Ansel, https://github.com/jansel, Meta
Oguz Ulgen, https://github.com/oulgen, Meta
Will Feng, https://github.com/yf225, Meta
Jongsok Choi, https://github.com/choijon5, Meta
Markus Hoehnerbach, https://github.com/v0i0, Meta
Outside of Meta, we have contributions from NVIDIA, AMD, Intel, IBM, RedHat, and the OSS community.
Eikan Wang, https://github.com/EikanWang, Intel
Alessandro Sangiorgi, https://github.com/fulvius31, RedHat
Adam Siemieniuk, https://github.com/adam-smnk, Intel Labs
Astarag Mohapatra, https://github.com/Athe-kunal, Evidium
Parshant Sharma, https://github.com/parsshar-RH, RedHat
Umesh Chand, https://github.com/umechand-amd, AMD
Hinrik Snær Guðmundsson, https://github.com/hinriksnaer, RedHat
Su Tong, https://github.com/Stonepia, RedHat
Yinuo Liu, https://github.com/qelk123, NVIDIA
Burkhard Ringlein, https://github.com/bringlein, IBM
See full list of contributors in https://github.com/pytorch/helion/graphs/contributors
Additional information
Launched at PTC 2025, Helion is still in the early stages of adoption. It is currently used by research labs, by OSS frameworks such as vLLM, and by internal production teams at Meta.
IBM Research has implemented the unified_attention_2d kernel in Helion as a new experimental backend in vLLM, and a PyTorch Blog post on this work is forthcoming.
Our RFC on integrating Helion into vLLM has been approved by the vLLM community. Helion is now officially an optional dependency of vLLM, enabling the integration of Helion kernels in vLLM.
Helion is also being used by another frontier lab (which cannot be disclosed at the moment).
Internal production teams at Meta use Helion and have achieved better performance than expert-tuned Triton and CuteDSL kernels.