FLAME GPU 2.0.0-rc.3 #1336
ptheywood
announced in
Announcements
FLAME GPU 2.0.0-rc.3 is the fourth release-candidate for FLAME GPU 2.0.0.
As a release-candidate, the API should be stable, unless any issues found during the release-candidate phase require a breaking change to resolve.
The latest version of FLAME GPU changes the licensing terms from MIT to a dual-license model of AGPL 3.0 and commercial. Contributions now require signing our CLA via our CLA bot, which will prompt new contributors when they open a pull request.
There are a number of breaking changes:

- JSON I/O now uses nlohmann::json (replacing RapidJSON), with some breaking changes for nan/inf (#1277). Any special limit values (e.g. +/- nan/inf) are written to JSON as NULL and read from JSON as NaN.
- Support has been dropped for older CUDA versions and Compute Capability 3.5 (sm_35) hardware. I.e. supported CUDA versions are now 12.x to 13.x (Windows requires >= 12.4) (#1302).
- Pre-built Linux binaries require glibc >= 2.28 unless built from source. (#1228)

See the changelog for more detail.
This release-candidate requires:

- CMake >= 3.25.2
- CUDA >= 12.0 (or >= 12.4 on Windows) and a Compute Capability >= 5.0 NVIDIA GPU.

For full version requirements, please see the Requirements section of the README.
Documentation and Support
Installing Pre-compiled Python Binary Wheels
Python binary wheels for pyflamegpu are not currently distributed via pip; however, they can be installed from the pyflamegpu wheelhouse - whl.flamegpu.com. They can also be installed by downloading the wheel artifacts from this release and installing the local file via pip.
To install pyflamegpu 2.0.0rc3 from whl.flamegpu.com, install via pip with --extra-index-url or --find-links and the appropriate URI from whl.flamegpu.com. E.g. to install the latest pyflamegpu build for CUDA 13.x without visualisation:
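A sketch of what this looks like, assuming the wheelhouse exposes a per-CUDA index such as cuda130 (check whl.flamegpu.com for the exact URI to use):

```bash
# Illustrative command - confirm the exact index URI at whl.flamegpu.com
python3 -m pip install --extra-index-url https://whl.flamegpu.com/whl/cuda130/ pyflamegpu==2.0.0rc3
```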
To install pyflamegpu 2.0.0rc3 manually, download the appropriate .whl file for your platform, and install it into your python environment using pip. I.e. for CUDA 13 under Linux with Python 3.10:
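A minimal sketch, assuming the non-visualisation CUDA 13.x wheel named per the filename format described below:

```bash
# Install a locally downloaded wheel into the current python environment
python3 -m pip install pyflamegpu-2.0.0rc3+cuda130-cp310-cp310-linux_x86_64.whl
```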
CUDA 12.x (>= 12.4 on Windows) or CUDA 13.x, including nvrtc, must be installed on your system, which must contain a Compute Capability 5.0 or newer NVIDIA GPU.

Python binary wheels are available for x86_64 systems with:

- glibc >= 2.28 (i.e. Ubuntu >= 18.10, CentOS/RHEL >= 8, etc.)
- Python 3.10 - 3.14
- CUDA 12.x (>= 12.4 on Windows), built for Compute Capability 50 60 70 80 90 GPUs
- CUDA 13.x, built for Compute Capability 75 80 90 100 110 120 GPUs

Wheel filenames are of the format
pyflamegpu-2.0.0rc3+cuda<CUDA>[.vis]-cp<PYTHON>-cp<PYTHON>-<platform>.whl, where:

- cuda<CUDA> encodes the CUDA version used
- .vis indicates visualisation support is included
- cp<PYTHON> identifies the python version
- <platform> identifies the OS/CPU architecture

For example:
- pyflamegpu-2.0.0rc3+cuda120-cp310-cp310-linux_x86_64.whl is a CUDA 12.0-12.x compatible wheel, without visualisation support, for python 3.10 on Linux x86_64.
- pyflamegpu-2.0.0rc3+cuda130.vis-cp314-cp314-win_amd64.whl is a CUDA 13.x compatible wheel, with visualisation support, for python 3.14 on Windows 64-bit.

Building FLAME GPU from Source
For instructions on building FLAME GPU from source, please see the Building FLAME GPU section of the README.
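As a rough sketch, assuming a standard out-of-source CMake build of the FLAMEGPU/FLAMEGPU2 repository (the README documents the full set of targets and configuration options):

```bash
# Minimal out-of-source build sketch; see the README for the authoritative steps
git clone https://github.com/FLAMEGPU/FLAMEGPU2.git
cd FLAMEGPU2
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j 8
```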
Known Issues
- There is a known issue which can cause invalid argument errors when embedded PTX is used to execute on a higher compute capability device. Upgrading to 461 (CUDA 12.6 Update 3) or ensuring you compile with the correct CMAKE_CUDA_ARCHITECTURES appears to resolve this issue. See #1253 for more information.
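As a hedged illustration, assuming a CMake configure step for your build, explicitly listing the compute capability of the GPU you will run on embeds matching device code instead of relying on PTX JIT compilation (the value 90 below is just an example):

```bash
# Example configure step: build real device code for the target GPU (e.g. compute capability 9.0)
cmake -S . -B build -DCMAKE_CUDA_ARCHITECTURES=90
```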