| Crate | Description | Features | Backends | Key Dependencies | Last Update |
|---|---|---|---|---|---|
| candle | Minimalist ML framework for Rust | PyTorch-like API, Training, Various Models | CPU, CUDA, CUDA NCCL, WASM | gemm, intel-mkl-src, cudarc, metal, accelerate-src | 2025-06-07 |
| burn | A comprehensive dynamic deep learning framework built in Rust, with flexibility, compute efficiency, and portability as its primary goals | Various backends, Kernel Fusion, Training, Various Models, ONNX | WGPU, Candle, Torch, Ndarray, Remote | matrixmultiply, blas-src, libm, openblas-src, ndarray, candle-core, cubecl, cudarc, tch | 2025-06-06 |
| dfdx | Deep learning in Rust, with shape-checked tensors and neural networks | Compile-time shape checking | CPU, CUDA, WGPU | gemm, cudarc, wgpu | 2024-01-25 |
| luminal | Deep learning at the speed of light | Static computation graph, RISC-style architecture, Kernel Fusion, Training | CPU, CUDA, Metal | matrixmultiply, cudarc, metal-rs | 2025-06-04 |
| autograph | A machine learning library for Rust | GPGPU kernels implemented with krnl | CPU, Vulkan | krnl, ndarray | 2024-08-19 |
| unda | General-purpose machine learning crate | Compiles to XLA | XLA | xla-rs | 2024-06-19 |
| custos | A minimal OpenCL, CUDA, Vulkan, and host-CPU array-manipulation engine/framework | Array manipulation, AutoDiff, Lazy execution | CPU, OpenCL, CUDA, Vulkan, NNAPI | min-cl, libm, ash, naga, nnapi | 2025-06-08 |
| zyx | Tensor library for machine learning | Lazy execution, AutoDiff | CUDA, OpenCL, WGPU | wgpu, vulkano, manual bindings to CUDA, OpenCL, and HSA | 2025-05-13 |
| zenu | A deep learning framework written in Rust | Training, AutoDiff | CPU, CUDA | cblas, openblas-src, manual binding to CUDA | 2024-12-30 |
| maidenx | A lightweight and fast AI framework in Rust focused on simplicity and performance | Educational focus, mirrors PyTorch's architecture | CPU, CUDA | - | 2025-04-23 |
| ort | Fast ML inference & training for ONNX models in Rust | ONNX, Various backends | CUDA, TensorRT, OpenVINO, oneDNN, DirectML, QNN, CoreML, ACL, TVM, CANN, etc. | ort-sys (unsafe Rust bindings for ONNX Runtime 1.20) | 2025-06-04 |
| tract | Tiny, no-nonsense, self-contained TensorFlow and ONNX inference | ONNX, TensorFlow | CPU, Metal | accelerate-src, blis-src, cblas, metal, ndarray, openblas-src, tensorflow, tflitec | 2025-06-06 |
| Kyanite | A neural-network inference library written in Rust | ONNX, Graph IR | CPU, CUDA | manual binding to CUDA | 2024-07-13 |
| mistral.rs | Blazingly fast LLM inference | LLM inference, safetensors, Quantization | CPU, CUDA, Metal | mkl, candle, metal, accelerate | 2025-06-06 |
| InfiniLM | A hand-written transformer model project developed from YdrMaster/llama2.rs | LLM inference, Multiple backends | CPU, CUDA, OpenCL, Ascend, etc. | operators | 2025-02-07 |
| operators | Multi-hardware operator library | Multi-hardware support | CPU, CUDA, OpenCL, Ascend, Cambricon | clrt, infinirt, cuda-driver | 2025-02-19 |
| crabml | A fast cross-platform AI inference engine using Rust and WebGPU | LLM inference, mmap, Quantization | CPU, WGPU | vulkano, wgpu | 2025-01-04 |
| diffusion-rs | Blazingly fast inference of diffusion models | Diffusion, Quantization, DDUF, Offloading | CPU, CUDA, Metal, etc. | cudarc, intel-mkl-src, accelerate-src, metal, gemm | 2025-04-01 |
| mmnn | Rust-based bash CLI for neural-network propagation/backpropagation | bash CLI, JSON config | - | - | 2025-04-13 |
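Several of the crates above (custos, zyx, zenu) list AutoDiff among their features. As background on what that feature provides, here is a minimal, self-contained sketch of scalar reverse-mode automatic differentiation in plain Rust. It is purely illustrative and not taken from any of the listed libraries: a tape records each operation's parents and local partial derivatives, and a single backward sweep accumulates gradients.

```rust
// Scalar reverse-mode autodiff on a tape. Each node stores its two
// parent indices and the local partials d(node)/d(parent).

#[derive(Clone, Copy)]
struct Node {
    parents: [usize; 2],
    grads: [f64; 2],
}

struct Tape {
    nodes: Vec<Node>,
    values: Vec<f64>,
}

impl Tape {
    fn new() -> Self {
        Tape { nodes: Vec::new(), values: Vec::new() }
    }

    // A leaf (input variable); self-referential parents with zero
    // local gradients contribute nothing in the backward sweep.
    fn leaf(&mut self, value: f64) -> usize {
        self.push(value, [0, 0], [0.0, 0.0])
    }

    fn push(&mut self, value: f64, parents: [usize; 2], grads: [f64; 2]) -> usize {
        self.nodes.push(Node { parents, grads });
        self.values.push(value);
        self.nodes.len() - 1
    }

    fn add(&mut self, a: usize, b: usize) -> usize {
        let v = self.values[a] + self.values[b];
        self.push(v, [a, b], [1.0, 1.0]) // d(a+b)/da = d(a+b)/db = 1
    }

    fn mul(&mut self, a: usize, b: usize) -> usize {
        let (va, vb) = (self.values[a], self.values[b]);
        self.push(va * vb, [a, b], [vb, va]) // product rule
    }

    // Backward sweep: accumulate d(output)/d(node) for every node.
    fn backward(&self, output: usize) -> Vec<f64> {
        let mut adj = vec![0.0; self.nodes.len()];
        adj[output] = 1.0;
        for i in (0..self.nodes.len()).rev() {
            let n = self.nodes[i];
            for k in 0..2 {
                adj[n.parents[k]] += adj[i] * n.grads[k];
            }
        }
        adj
    }
}

fn main() {
    // f(x, y) = x * y + x, evaluated at x = 3, y = 4
    let mut t = Tape::new();
    let x = t.leaf(3.0);
    let y = t.leaf(4.0);
    let xy = t.mul(x, y);
    let f = t.add(xy, x);
    let g = t.backward(f);
    // df/dx = y + 1 = 5, df/dy = x = 3
    println!("f = {}, df/dx = {}, df/dy = {}", t.values[f], g[x], g[y]);
}
```

Production frameworks extend this idea to n-ary tensor ops, broadcasting, and kernel fusion, but the tape-plus-reverse-sweep structure is the same.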