Jaxlib version 0.4.24, released on Feb 6 2024, dropped support for CUDA 12.2, which had its latest release in August 2023. CUDA 12.3 was released in October 2023. This means that CUDA 12.2 was supported for only approximately four months after a newer minor version was released. While I understand that there must be a good reason to drop support for older versions so quickly, unfortunately, I fail to see it. I'm curious about what these reasons might be. I would expect older versions to be supported for as long as NVIDIA supports them, which, if I understand correctly, will be until June 2026.
Simple enough: we are making use of newer CUDA features. In fact, we're eager to upgrade to CUDA 12.4, since it fixes some known deadlocks in CUDA 12.3. As a rule, NVIDIA doesn't retroactively make bug fixes or improvements to older CUDA releases, so if you're using an older release you will probably have a worse experience.
(We're looking into allowing some amount of backward compatibility in what we release; the fundamental blocker at the moment is testing. However, the reason we're doing this is PyTorch interoperability, since PyTorch is much slower to upgrade.)
However, given that CUDA is easy to install via `pip` these days, why is it difficult for you to upgrade?
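For context, the `pip`-based install the reply refers to looked roughly like this around the time of this thread (the `cuda12_pip` extra and the release-index URL are taken from the JAX install instructions of that era; check the current JAX docs before copying, as the recommended extra has since changed):

```shell
# Install jaxlib together with NVIDIA's CUDA 12 libraries delivered as
# pip wheels, so no system-wide CUDA toolkit installation is required.
pip install --upgrade "jax[cuda12_pip]" \
    -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

Because the CUDA runtime libraries are themselves pip packages here, upgrading jax generally pulls in a matching CUDA alongside it, which is what makes tracking the newer CUDA minor versions relatively painless.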