This repository contains kernels that are maintained by Hugging Face, but not necessarily developed by Hugging Face. This mainly concerns:
- Kernels that are developed by Hugging Face (such as yamoe).
- Kernels that are useful or high-impact, but where the upstream maintainer does not support kernels yet.
Kernels in this repository are automatically built and uploaded to hf.co/kernels-community.
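Once a kernel lands in this repository, it can be loaded straight from the Hub with the `kernels` Python library. Below is a minimal sketch using the existing `kernels-community/activation` kernel as an example; the op names come from that kernel's README, so check it for the ops that are actually exposed.

```python
import torch
from kernels import get_kernel

# Downloads the pre-built binary for the current torch/CUDA setup from the Hub
# and loads it; no local compilation is needed.
activation = get_kernel("kernels-community/activation")

x = torch.randn((16, 16), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)

# gelu_fast is one of the ops exposed by the activation kernel.
activation.gelu_fast(y, x)
```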
For your own kernels, we recommend developing them under your own GitHub organization and uploading them to the Hugging Face Hub under your own namespace. As with models and datasets, the kernels ecosystem is designed to empower the community to share their own kernels on the Hub. Of course, you are free to copy and adapt our GitHub Actions workflows to build and upload kernels.
If you see an impactful kernel that you think we should host, please open a GitHub issue.
If a kernel you authored is hosted in this repository, we packaged it as a Hub kernel because it is impactful and most likely used by transformers, diffusers, or other Hugging Face projects. If you would like to maintain the Hub kernel yourself, we can transfer ownership to you. Please contact us through our shared Slack collab channel (if available) or open a GitHub issue.
Here is a small breakdown of the steps to add a new kernel:
- Create a new directory in the `kernels-community` repository with the kernel name.
- Add a `README.md` file to the directory, with a link to the kernel's source code, a kernel YAML tag, and some benchmarks.
- Add a `flake.nix` file to the directory (you can check other kernels for examples).
- Add a `build.toml` file to the directory, where you specify which backend the kernel supports, which dependencies it has, and the source files.
- Add a directory for the kernel's source code (if it's not a Triton kernel).
- Add a `torch-ext` directory that makes the kernel accessible from Python through the PyTorch extension mechanism.
- Add a `torch_binding.cpp` file to the `torch-ext` directory that registers the kernel as a Torch op (if it's not a Triton kernel).
- Add a directory with the same name as the kernel inside the `torch-ext` directory, and add an `__init__.py` file to it; there you can access the kernel through the `._ops` namespace (see the sketch after this list). For Triton kernels, you can include all the source files in the `torch-ext` directory.
- To test whether the kernel builds successfully, you can use the kernel-builder.
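As a rough illustration of the last two `torch-ext` steps, the package `__init__.py` typically just wraps the op that `torch_binding.cpp` registered. This is a hedged sketch, not a template: the kernel name `example`, the op `example_op`, and its signature are invented here, and the `_ops` module is generated by the build, so copy the exact pattern from an existing kernel in kernels-community.

```python
# torch-ext/example/__init__.py -- all names below are illustrative.
import torch

# _ops is generated by the build and exposes the Torch ops registered in torch_binding.cpp.
from ._ops import ops


def example_op(out: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Thin Python wrapper around the registered Torch op (hypothetical signature)."""
    ops.example_op(out, x)
    return out


__all__ = ["example_op"]
```

Once built and uploaded, this wrapper is what users call through `get_kernel`, as in the activation example above.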
For more details, check writing hub kernels and building kernels with Nix, as well as the examples in kernels-community.
When you are done, you can open a PR to the kernels-community repository. Please title the PR with the kernel name, followed by a colon and a short description, for example `example: add example kernel`, and do not include build outputs in the PR.
#TODO: Add benchmarking instructions after https://github.com/huggingface/kernels-uvnotes is ready.
