Releases: AmusementClub/vs-mlrt
v15.15: latest TensorRT libraries
Compared to the previous stable (v15.14) release:
TRT
- Upgraded to TensorRT 10.15.1.
General
- Upgraded to CUDA 13.1.1 and cuDNN 9.19.0.
vsmlrt.py
- Added support for ArtCNN v1.5.0 models.
- Added bf16 I/O support for the MIGX backend.
Full Changelog: v15.14...v15.15
v15.14.rtx: latest TensorRT-RTX libraries
This is a pre-release for the experimental TRT_RTX backend. There may be further updates to this pre-release.
TRT-RTX
- Upgraded to TensorRT-RTX 1.2.
- Added engine validity check for debugging invalid engines.
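The release notes don't show what the validity check looks like; as a rough illustration only, such a check usually amounts to a cheap sanity test on the serialized engine file before attempting to deserialize it. The function name and criteria below are assumptions, not vs-mlrt's actual code:

```python
import os

def engine_looks_valid(path: str, min_size: int = 16) -> bool:
    """Cheap sanity check on a serialized engine file: it must exist and be
    non-trivially sized. A real check may also inspect headers or versions."""
    try:
        return os.path.getsize(path) >= min_size
    except OSError:
        # missing or unreadable file counts as invalid
        return False
```

A check like this lets the runtime report "invalid engine" early, instead of failing deep inside deserialization.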
Full Changelog: v15.14...v15.14.rtx
v15.14: latest TensorRT libraries
Compared to the previous stable (v15.13) release:
TRT
- Upgraded to TensorRT 10.14.1.
General
- Upgraded to CUDA 13.0.2 and cuDNN 9.13.0.
ORT
- Upgraded to ONNX Runtime 1.23.0 (ecb26fb).
NCNN_VK
- Upgraded to the latest ncnn (86efe80) to fix hangs with NVIDIA 565 or later drivers.
- Added support for fp16 I/O, similar to other existing supported backends.
vsmlrt.py
- Added support for ArtCNN R16F96 Chroma model.
- Added output_format parameter to non-CUDA ORT backends.
- Added fp16 I/O support for the TRT_RTX backend.
- Added optional support for fp16 conversion using the TensorRT Model Optimizer for TRT_RTX.
- Attempt to regenerate the engine after a failed engine compilation for TRT, MIGX and TRT_RTX.
- Remove extraneous plugin check by @Rukario in #135
- Improve TRT_RTX handling of fp16 conversion and standalone usage by @abihf in #140
- fix: use correct path for checking alter engine size by @shssoichiro in #144
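The engine-regeneration behavior can be pictured as a generic retry pattern: if compilation raises or leaves behind a broken artifact, discard it and rebuild. This is a sketch only; the function names and the exact failure handling in vsmlrt.py are assumptions:

```python
import os

def build_engine_with_retry(build_fn, engine_path, max_attempts=2):
    """If engine compilation raises or leaves an empty file, delete the
    stale artifact and try again before giving up."""
    for _ in range(max_attempts):
        try:
            build_fn(engine_path)
            if os.path.getsize(engine_path) > 0:
                return engine_path
        except (RuntimeError, OSError):
            pass
        # drop the partial/invalid engine so the next attempt starts clean
        if os.path.exists(engine_path):
            os.remove(engine_path)
    raise RuntimeError(f"engine compilation failed after {max_attempts} attempts")
```

The point of the retry is that a transient failure (or a stale cached engine) should not permanently poison the engine cache on disk.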
Full Changelog: v15.13...v15.14
v15.13.ncnn
This is an experimental pre-release for the latest ncnn library.
NCNN_VK
- Upgraded to the latest ncnn (86efe80) to fix hangs with NVIDIA 565 or later drivers.
- Added support for fp16 I/O, similar to other existing supported backends.
vsmlrt.py
- Added support for ArtCNN v1.4.0 models.
Known issue
- Using the NCNN_VK(fp16=True) backend on the ArtCNN R8F64 chroma model may exhibit chroma shift at irregular input resolutions.
Full Changelog: v15.13.cu13...v15.13.ncnn
v15.13.cu13: latest TensorRT libraries
This is an experimental pre-release for the latest TensorRT libraries. The TRT_RTX backend is not supported in this pre-release.
TRT
- Upgraded to TensorRT 10.13.3.
General
- Upgraded to CUDA 13.0.1 and cuDNN 9.13.0.
vsmlrt.py
- Attempt to regenerate the engine after a failed engine compilation for TRT, MIGX and TRT_RTX.
ORT
- Upgraded to ONNX Runtime 1.23.0 (ecb26fb).
Full Changelog: v15.13.ort...v15.13.cu13
v15.13.ort: latest ONNX Runtime libraries
This is a pre-release for the latest ONNX Runtime library.
ORT
- Upgraded to ONNX Runtime 1.23.0 (4754a1d) and added support for NVIDIA RTX 50-series GPUs.
- Support for attention operations in ONNX Runtime for LLMs is disabled.
- Support for 900- and 10-series GPUs is dropped from ORT_CUDA.
General
- Upgraded to cuDNN 9.12.0.
vsmlrt.py
- Added optional support for fp16 conversion using TensorRT model optimizer for
TRT_RTX.
Known issues
- fp16 inference for RIFE v2 and SAFA models, as well as fp32/fp16 inference for some SwinIR models, does not currently work in TRT_RTX.
- Any old cuDNN v8 installation should be removed; otherwise, DLL loading may fail.
Full Changelog: v15.13.rtx...v15.13.ort
v15.13.rtx: experimental TensorRT-RTX backend
This is a pre-release for the experimental TRT_RTX backend.
TRT-RTX
- Upgraded to TensorRT-RTX 1.1.
vsmlrt.py
- Added support for ArtCNN R16F96 Chroma model.
- Added output_format parameter to non-CUDA ORT backends.
- Added fp16 I/O support for the TRT_RTX backend.
Known issues
- fp16 inference for RIFE v2 and SAFA models is currently not supported in the TRT_RTX backend.
Full Changelog: v15.13...v15.13.rtx
v15.13: latest TensorRT libraries
TRT
- Upgraded to TensorRT 10.13.0 and CUDA 12.9.1.
vsmlrt.py
- Fix input name.
- Fix error handling for Expr.
TRT-RTX
- Added support for dynamic shapes.
Full Changelog: v15.12...v15.13
v15.12: latest TensorRT libraries
TRT
- Upgraded to TensorRT 10.12.0.
vsmlrt.py
- Added support for the SAFA v0.5 models.
- Prioritize the use of akarin.Expr.
- Fix tile size check in SAFA().
misc
- Fix tile size check in vsort and vsov.
- Added experimental support for the TensorRT-RTX library. This TRT_RTX backend is under development; check pre-releases with the .rtx suffix.
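For context on the tile-size fixes above: a check of this kind typically verifies that a requested tile fits inside the frame and respects the model's alignment requirement. The following is a generic sketch under those assumptions, not the actual vsmlrt/vsort/vsov logic:

```python
def check_tile_size(width, height, tile_w, tile_h, multiple=1):
    """Validate a tiling configuration: tiles must be positive, fit inside
    the frame, and be aligned to the model's required size multiple."""
    if tile_w <= 0 or tile_h <= 0:
        raise ValueError("tile size must be positive")
    if tile_w > width or tile_h > height:
        raise ValueError("tile size must not exceed frame dimensions")
    if tile_w % multiple != 0 or tile_h % multiple != 0:
        raise ValueError(f"tile size must be a multiple of {multiple}")
    return True
```

Getting such a check wrong usually surfaces as either a spurious rejection of a valid tiling or a crash/artifact when an oversized or misaligned tile reaches the model.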
Full Changelog: v15.11...v15.12
v15.11.rtx: experimental TensorRT-RTX backend
This pre-release adds experimental support for TensorRT-RTX, and this TRT_RTX backend may not be backward-compatible. This backend is under development, and future changes will be available in the v15.12.rtx pre-release.
Known issues
- Dynamic shape is not supported.
- For the vsmlrt.py wrapper, fp16 processing currently requires the onnxconverter-common package, and fp16 input/output is not supported.
Full Changelog: v15.11...v15.11.rtx