---
title: "Google Summer of Code 2026"
weight: 100
# summary: "Google Summer of Code 2026"
---

The Leela Chess Zero project is quite diverse and has several areas to
contribute to. Below is a list of potential projects for Google Summer of Code
2026, of varying scope and concreteness.

## Develop Python bindings for the Lc0 backends and the search

* **Skills needed:** Python, C++
* **Difficulty:** Easy / 90 hours

The current Python bindings for Lc0 are quite rudimentary and outdated. To make
it easier for researchers and enthusiasts to experiment with Lc0, we need to
develop comprehensive Python bindings (using pybind11 or nanobind) for the Lc0
backends and potentially for the search. They should integrate with
`python-chess`.

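To give a feel for the goal, here is a minimal sketch of how such bindings
might be used. The `lc0` module, the `Backend` class, and the `evaluate()`
method are hypothetical placeholders invented for this illustration (the
bindings do not exist yet); only `python-chess` is a real dependency.

```python
# Hypothetical usage sketch: the `lc0` module, Backend class and evaluate()
# method are invented for illustration and are not an existing API.
import chess  # python-chess, which the bindings should integrate with
import lc0    # hypothetical pybind11/nanobind extension module

backend = lc0.Backend("cuda-auto", weights="network.pb.gz")  # hypothetical

board = chess.Board()
board.push_san("e4")

# Evaluate the position: a value estimate plus a policy prior over legal
# moves, keyed by python-chess Move objects (all hypothetical).
result = backend.evaluate(board)
print("Q for side to move:", result.q)
for move, prior in sorted(result.policy.items(), key=lambda kv: -kv[1])[:3]:
    print(board.san(move), prior)
```
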
## Reimplementation of `live.lczero.org`

* **Skills needed:** (vanilla) TypeScript, Python
* **Difficulty:** Medium / 175 hours

`live.lczero.org` is a [web site](https://lczero.org/blog/2024/11/wcc24-live/)
where the Leela Chess Zero team runs live annotation of important chess events
(e.g., the World Chess Championship). The current version was implemented in a
hurry over a few days. We'd like to have a more robust and feature-rich
reimplementation, which we aim to use for WCC 2026.

Some of the ideas we had in mind (for example, showing move tendencies that are
extracted from deep within the search tree) also require Lc0 engine changes
(C++); in that case, the difficulty is certainly Hard and the duration is
longer.

It's also possible to implement only the C++ part of the project.

## Update the Metal/CoreML backends

* **Skills needed:** C++, Objective-C++, Metal or CoreML
* **Difficulty:** Medium / 175 hours

The Lc0 backends for Apple devices have not received the same level of
attention (pun intended) as other backends. Currently we have the metal backend
using Metal Performance Shaders and (real soon now) the onnx-coreml backend
using the onnxruntime CoreML execution provider. There are several potential
improvements we have identified, with potentially more available to someone
with a deeper understanding of the aforementioned technologies:

1. Evaluate and improve the suggested Metal improvements:
   * FP16 support. [PR2132](https://github.com/LeelaChessZero/lc0/pull/2132)
   * Compile the execution graph to speed up subsequent evaluations.
     [PR2245](https://github.com/LeelaChessZero/lc0/pull/2245)
   * Use constant tensors.
     [PR2320](https://github.com/LeelaChessZero/lc0/pull/2320)
2. Check whether any of the techniques used in
   [Metal FlashAttention](https://github.com/philipturner/metal-flash-attention)
   are applicable to our nets.
3. Improve onnxruntime CoreML support for variable batch sizes, most notably
   for the [Resize](https://github.com/microsoft/onnxruntime/issues/26328)
   operator.
4. Add the [Attention](https://onnx.ai/onnx/operators/onnx__Attention.html)
   operator to onnxruntime for CoreML using the CoreML
   [scaled_dot_product_attention](https://apple.github.io/coremltools/source/coremltools.converters.mil.mil.ops.defs.html#coremltools.converters.mil.mil.ops.defs.iOS18.transformers.scaled_dot_product_attention)
   operator (a rough sketch follows this list).

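As a rough illustration of item 4, the sketch below builds a tiny Core ML
program around the `scaled_dot_product_attention` MIL op using coremltools. It
is untested, the shapes are arbitrary, and the actual project work would happen
in the onnxruntime CoreML execution provider (C++) rather than in Python.

```python
# Untested sketch: exercise the iOS18 scaled_dot_product_attention MIL op
# via coremltools. Shapes are arbitrary placeholders.
import coremltools as ct
from coremltools.converters.mil import Builder as mb

SHAPE = (1, 4, 64, 32)  # (batch, heads, sequence, head_dim)

@mb.program(
    input_specs=[mb.TensorSpec(shape=SHAPE),   # query
                 mb.TensorSpec(shape=SHAPE),   # key
                 mb.TensorSpec(shape=SHAPE)],  # value
    opset_version=ct.target.iOS18,
)
def attention_prog(query, key, value):
    return mb.scaled_dot_product_attention(query=query, key=key, value=value)

# Convert the MIL program into an ML Program model targeting iOS 18+.
mlmodel = ct.convert(attention_prog, minimum_deployment_target=ct.target.iOS18)
```
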
## Have a JavaScript/WebAssembly backend for Lc0

* **Skills needed:** C++, JavaScript, WebAssembly
* **Difficulty:** Medium / 175 hours

The ability to run Lc0 directly in the browser would enable many use cases,
potentially including integrating Lc0 into lichess. We have had several
almost-working prototypes of WebAssembly backends for Lc0, but none of them was
ever productionized.

The latest attempt is [PR2072](https://github.com/LeelaChessZero/lc0/pull/2072).

We used to host [play.lczero.org](https://play.lczero.org), where anyone could
quickly play against Lc0 online; it would be nice to revive it. The source code
for the old version (which ran lc0 on the server) is
[here](https://github.com/Uriopass/LCPlay).

## Develop and train a CPU-focused backend for Lc0

* **Skills needed:** C++, basic machine learning knowledge
* **Difficulty:** Medium / 175 hours

To optimize the scalability of the search algorithm, we need fast neural
network evaluation. Normally, saturating the search requires several high-end
GPUs. However, if we had a very fast "mock" backend that runs on CPUs, we could
optimize the search on more modest hardware.

We already have a few such "mock" backends ("random" and "trivial"), but they
produce unrealistic evaluations, and searching with them results in
non-representative tree shapes.

The idea is to implement a CPU-focused backend that would be fast and have
reasonable strength.

Techniques that can be used include int8 quantization and sparse matrix
multiplication, or some of the methods described in
[this paper](https://arxiv.org/abs/2106.10860).

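As a toy illustration of the int8 quantization idea (not Lc0 code; it glosses
over per-channel scales, zero points, and saturating SIMD kernels), the
following NumPy sketch quantizes a weight matrix and an activation vector to
int8 and performs the matrix multiplication in integer arithmetic:

```python
# Toy sketch of symmetric per-tensor int8 quantization; illustrative only,
# not how a real Lc0 CPU backend would be structured.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # float32 weights
x = rng.standard_normal(256).astype(np.float32)         # float32 activations

# Quantize both operands to int8 with a single symmetric scale each.
w_scale = np.abs(W).max() / 127.0
x_scale = np.abs(x).max() / 127.0
W_q = np.clip(np.round(W / w_scale), -127, 127).astype(np.int8)
x_q = np.clip(np.round(x / x_scale), -127, 127).astype(np.int8)

# Integer matmul with int32 accumulation, then dequantize the result.
y = (W_q.astype(np.int32) @ x_q.astype(np.int32)).astype(np.float32)
y *= w_scale * x_scale

# The quantized result should closely track the float32 reference.
print("max abs error:", np.abs(y - W @ x).max())
```

On real hardware the int8 products map onto instructions such as x86 VNNI or
ARM NEON dot products with int32 accumulation, which is where the speedup over
float32 inference comes from.
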
## Update CUDA kernels

* **Skills needed:** C++, CUDA
* **Difficulty:** Hard / 175 hours

Most of the kernels in the CUDA backend haven't been touched since the Volta
architecture, so there is potential for a decent performance improvement with
code tuned for newer GPUs. Additionally, build system updates since the
original kernels were written make it easy to include architecture-specific
kernels in a clean way, so that there is no performance penalty for older GPUs.

## Port existing backends to the new backend API

* **Skills needed:** C++
* **Difficulty:** Easy / 90 hours

Starting with Lc0 v0.32, we have a new backend API that is more flexible.
However, most of the existing backends (CUDA, OpenCL, XLA) still use the old
API through a compatibility wrapper. We need to port them to the new API to be
able to use new features and optimizations.

## Extend UCI protocol with JSON-based input and output information

* **Skills needed:** C++
* **Difficulty:** Medium / 175 hours

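As a purely hypothetical illustration of the idea, an engine could emit a
JSON-augmented `info` line carrying structured search data that a GUI or script
parses directly. The `info json` prefix and every field below are invented for
this sketch; the actual format has not been designed.

```python
# Purely hypothetical sketch: the "info json" line and all field names are
# invented for illustration; no such UCI extension exists in Lc0 today.
import json

# A line the engine might emit alongside the classic "info" output.
line = ('info json {"depth": 12, "nodes": 183452, '
        '"pv": ["e2e4", "e7e5", "g1f3"], '
        '"wdl": {"w": 0.41, "d": 0.45, "l": 0.14}}')

PREFIX = "info json "
if line.startswith(PREFIX):
    data = json.loads(line[len(PREFIX):])
    print("Principal variation:", " ".join(data["pv"]))
    print("Expected score:", data["wdl"]["w"] + 0.5 * data["wdl"]["d"])
```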