Flambeau is a Nim wrapper for PyTorch's LibTorch (C++ API). It provides:
- Low-level C++ FFI bindings to PyTorch tensors and operations
- High-level idiomatic Nim API with type safety
- Neural network modules (Linear, Conv2d, Dropout, etc.)
- Automatic differentiation support
- GPU acceleration via CUDA
```
flambeau/
├── flambeau/raw/bindings/    # Low-level C++ FFI bindings
│   ├── rawtensors.nim        # Core tensor operations (880+ lines)
│   ├── c10.nim               # C10 library types (ArrayRef, Scalar, etc.)
│   ├── neural_nets.nim       # NN modules (Linear, Conv2d, etc.)
│   └── data_api.nim          # Data loaders and datasets
├── flambeau/raw/sugar/       # Syntax sugar and macros
│   ├── indexing_macros.nim   # Fancy slicing syntax (_..^2, etc.)
│   └── rawinterop.nim        # Conversion utilities
├── flambeau/tensors/         # High-level tensor API
│   ├── accessors.nim         # Element access and indexing
│   ├── operators.nim         # Arithmetic operators (+, -, *, /)
│   ├── aggregate.nim         # sum, mean, max, min, etc.
│   ├── mathalgo.nim          # Math functions (sin, cos, sqrt, etc.)
│   ├── fft.nim               # Fast Fourier Transform
│   └── fancy_index.nim       # Advanced indexing with indexedMutate
├── flambeau/tensors.nim      # Main high-level API
├── flambeau/install/         # LibTorch installation utilities
└── examples/                 # Usage examples
```
- ✅ Tensor operations: reshape, transpose, permute, squeeze, unsqueeze
- ✅ Matrix operations: mm, matmul, bmm, qr, luSolve
- ✅ Indexing: Arraymancer-compatible accessors (getIndex, atIndex, atIndexMut)
- ✅ Slicing: Advanced slicing with `_`, `..`, `|(step)` syntax
- ✅ In-place operators: `+=`, `-=`, `*=`, `/=` via the `indexedMutate` macro
- ✅ Math operations: All trig, exp, log, sqrt, pow, etc.
- ✅ Aggregation: sum, mean, max, min, variance, stddev, argmax, argmin
- ✅ FFT: Complete 1D/2D/ND FFT support
- ✅ Neural networks: Forward/backward pass, gradient descent
- ✅ Apple Silicon: Automatic Rosetta 2 compilation for x86_64 LibTorch
- ✅ CUDA: Support up to CUDA 12.8
- All tensor tests passing (7 test files)
- XOR neural network demo: 100% accuracy
- Some NN module tests fail due to Nim compiler internal error (pre-existing)
High-level `Tensor[T]` wraps low-level `RawTensor`. Always convert at the boundary:

```nim
# High-level -> Raw
func mm*[T](t, other: Tensor[T]): Tensor[T] =
  asTensor[T](rawtensors.mm(asRaw(t), asRaw(other)))
```

The `indexedMutate` macro is required for in-place operations on indexed tensors:

```nim
indexedMutate:
  a[1, 2] += 10  # Transforms to: a[1, 2] = a[1, 2] + 10
```

PyTorch uses `ArrayRef` for passing arrays. Nim arrays are converted internally:

```nim
let dims = [2'i64, 3'i64, 4'i64]
let tensor = zeros[float32](dims)  # dims.asTorchView() happens internally
```

Use PyTorch's `Scalar` type for numeric arguments to C++ functions.
LibTorch for macOS is x86_64 only. config.nims forces x86_64 compilation:

```nim
when defined(macosx) and defined(arm64):
  switch("cpu", "amd64")
  switch("passC", "-arch x86_64")
  switch("passL", "-arch x86_64")
```

File: flambeau/tensors/accessors.nim:203
Add idiomatic Nim iterators for looping over tensor elements:

```nim
iterator items*[T](t: Tensor[T]): T =
  ## Iterate over all elements in flattened order
  let n = t.numel()
  for i in 0 ..< n:
    yield t.atContiguousIndex(i)

iterator pairs*[T](t: Tensor[T]): (int, T) =
  ## Iterate with flat indices
  let n = t.numel()
  for i in 0 ..< n:
    yield (i, t.atContiguousIndex(i))

# For 2D tensors
iterator rows*[T](t: Tensor[T]): Tensor[T] =
  ## Iterate over rows of a 2D tensor
  let nrows = t.shape()[0]
  for i in 0 ..< nrows:
    yield t[i, _]
```

Impact: Makes tensor manipulation more Nim-idiomatic.
File: flambeau/tensors.nim:231-239

Uncomment and implement:

```nim
func view_as_real*[T: SomeFloat](self: Tensor[Complex[T]]): Tensor[T] =
  ## Convert complex tensor to real tensor (last dim becomes size 2)
  asTensor[T](rawtensors.view_as_real(asRaw(self)))

func view_as_complex*[T: SomeFloat](self: Tensor[T]): Tensor[Complex[T]] =
  ## Convert real tensor to complex (last dim must be size 2)
  asTensor[Complex[T]](rawtensors.view_as_complex(asRaw(self)))
```

Check if view_as_real and view_as_complex exist in rawtensors.nim. If not, add:

```nim
func view_as_real*(self: RawTensor): RawTensor {.importcpp: "torch::view_as_real(@)".}
func view_as_complex*(self: RawTensor): RawTensor {.importcpp: "torch::view_as_complex(@)".}
```

Impact: Essential for signal processing and FFT workflows.
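For intuition, here is a plain-Python sketch of the torch semantics the wrappers expose (lists standing in for 1-D tensors; this is an illustration, not Flambeau code):

```python
def view_as_real(xs):
    # torch.view_as_real semantics: each complex element becomes a
    # [real, imag] pair, so shape (n,) becomes shape (n, 2).
    return [[z.real, z.imag] for z in xs]

def view_as_complex(pairs):
    # Inverse: the last dimension must be size 2.
    return [complex(re, im) for re, im in pairs]

print(view_as_real([1 + 2j, 3 - 1j]))  # [[1.0, 2.0], [3.0, -1.0]]
print(view_as_complex([[1.0, 2.0]]))   # [(1+2j)]
```

In PyTorch proper both functions are views over the same storage (no copy), which is why the high-level signatures above only change the element type, not the data.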
File: flambeau/raw/bindings/rawtensors.nim:613

Add median support (requires tuple handling):

```nim
func median*(self: RawTensor): RawTensor {.importcpp: "#.median()".}
func median*(self: RawTensor, axis: int64, keepdim: bool = false): CppTuple2[RawTensor, RawTensor] {.importcpp: "torch::median(@)".}
```

Then add high-level wrappers in flambeau/tensors/aggregate.nim:

```nim
func median*[T](self: Tensor[T]): T =
  asTensor[T](rawtensors.median(asRaw(self))).item()

func median*[T](self: Tensor[T], axis: int64, keepdim: bool = false): tuple[values: Tensor[T], indices: Tensor[int64]] =
  let cppTup = rawtensors.median(asRaw(self), axis, keepdim)
  result.values = asTensor[T](cppTup.get(0))
  result.indices = asTensor[int64](cppTup.get(1))
```

Impact: Completes the statistical operations suite.
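A plain-Python sketch of the value/index semantics the wrapper should match (torch.median documents that for even-length input it returns the lower of the two middle values, together with its index; this is an illustration, not Flambeau code):

```python
def median_with_index(row):
    # Mirrors torch.median along one dimension: returns (value, index),
    # taking the LOWER of the two middle values for even-length input.
    order = sorted(range(len(row)), key=lambda i: row[i])
    mid = order[(len(row) - 1) // 2]
    return row[mid], mid

print(median_with_index([3.0, 1.0, 2.0]))       # (2.0, 2)
print(median_with_index([4.0, 1.0, 3.0, 2.0]))  # (2.0, 3)
```

This lower-middle convention is worth a test case: it differs from the "average of the two middle values" definition used by e.g. statistics.median.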
File: flambeau/tensors/accessors.nim:201

Add memory layout checking and conversion:

```nim
func is_contiguous*[T](t: Tensor[T]): bool =
  ## Check if tensor is C-contiguous in memory
  asRaw(t).is_contiguous()

func strides*[T](t: Tensor[T]): seq[int64] =
  ## Get the strides of a tensor
  let raw_strides = asRaw(t).strides()
  result = newSeq[int64](raw_strides.size())
  for i in 0 ..< raw_strides.size():
    result[i] = raw_strides[i]

func contiguous*[T](t: Tensor[T]): Tensor[T] =
  ## Return a contiguous version of the tensor (already exists in tensors.nim)
  asTensor[T](rawtensors.contiguous(asRaw(t)))
```

Check if is_contiguous() and strides() exist in rawtensors.nim, add if missing.

Impact: Better performance control and debugging.
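For reference, C-contiguous strides are fully determined by the shape; a plain-Python sketch (strides counted in elements, as PyTorch reports them; illustration only):

```python
def contiguous_strides(shape):
    # Row-major (C) strides: the last dimension has stride 1 and each
    # earlier dimension's stride is the product of all later extents.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def is_c_contiguous(shape, strides):
    # A tensor is C-contiguous iff its strides match this pattern.
    return list(strides) == contiguous_strides(shape)

print(contiguous_strides([2, 3, 4]))    # [12, 4, 1]
print(is_c_contiguous([2, 3], [1, 2]))  # False (column-major layout)
```

Operations like transpose and permute change strides without moving data, which is exactly when `is_contiguous` returns false and `contiguous()` must copy.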
File: tests/tensor/test_accessors_simple.nim:78

Currently skipped. Investigate PyTorch's negative indexing and map it to Nim's `^` operator:

```nim
# Should work: a[^1] means a[-1] in Python/PyTorch
```

May require custom macro handling in fancy_index.nim.
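The index mapping itself is simple; a plain-Python sketch of what the macro would have to emit (`^k` is Nim's BackwardsIndex; the helper name here is hypothetical):

```python
def backwards_to_absolute(k, length):
    # Nim's ^k counts from the end: ^1 is the last element.
    # Python/PyTorch write the same thing as index -k, i.e. length - k.
    if not 1 <= k <= length:
        raise IndexError(f"^{k} out of range for length {length}")
    return length - k

a = [10, 20, 30]
print(a[backwards_to_absolute(1, len(a))])  # 30 (a[^1] in Nim, a[-1] in Python)
print(a[backwards_to_absolute(3, len(a))])  # 10
```

The harder part is presumably syntactic: the macro must rewrite `a[^k]` before the expression reaches the FFI layer, since LibTorch only understands plain (possibly negative) integer indices.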
File: flambeau.nimble:29, flambeau/libtorch.nim:26

- Remove the gtest dependency
- Auto-install LibTorch to `.nimble/bin/`
- Document the installation process better
- Remove experimental views pragmas (wait for stable Nim support)
- Clean up export statements in `flambeau_raw.nim` and `flambeau_nn.nim`
- Add documentation links in `neural_nets.nim`
File: tests/raw/test_nn.nim:76 (previously)

The generic `init*(T: type Module, options: Options)` signature caused:

```
Error: internal error: expr(skType); unknown symbol
```

Root Cause: Nim's type inference fails when combining generic type parameters with the constructor pragma and object arguments.

Fix Applied: Added non-generic wrapper functions in neural_nets.nim:

- `newLinear(options)` instead of `Linear.init(options)`
- `newConv2d(options)` instead of `Conv2d.init(options)`
- `newDropout(proba)` instead of `Dropout.init(proba)`

These wrappers use direct importcpp without type parameters, avoiding the inference bug. All Module API tests now pass.
File: flambeau/raw/bindings/rawtensors.nim:384
Bounds checking exists at high level but not consistently at FFI boundary. Consider adding IndexDefect raises at raw layer.
PyTorch doesn't provide ARM64 LibTorch for macOS. Solution in config.nims forces x86_64 compilation + Rosetta 2 execution.
Nim's ^1 (BackwardsIndex) doesn't map directly to PyTorch's negative indexing. Currently skipped in tests.
- User Rules:
  - Never use emojis in code/comments
  - Imports at top level (no relative imports)
  - Always use `pytest` for testing
  - Challenge assertions with questions
- Testing:
  - Run tests: `nimble test`
  - Run a single test: `nim cpp -r --hints:off tests/tensor/test_name.nim`
  - Use `indexedMutate` for in-place operations in tests
- Neural Network Modules:
  - Use `newLinear(options)`, not `Linear.init(options)`
  - Use `newConv2d(options)`, not `Conv2d.init(options)`
  - Use `newDropout(proba)`, not `Dropout.init(proba)`
  - The `new*` functions avoid Nim type inference bugs
- Git Commits:
  - Author: `clonkk <rf.clonk@linuxmail.org>`
  - Simple, descriptive messages
  - Don't commit markdown files (unless explicitly requested)
- Read `flambeau/tensors.nim` - the main high-level API entry point
- Check `flambeau/raw/bindings/rawtensors.nim` - see what's available at the FFI level
- Look at `examples/nn_xor_complete.nim` - a working neural network example
- Run `nimble test` - verify everything still works
- Review `TODOS.md` - the full list of missing functionality
```sh
# Install LibTorch
nimble install_libtorch

# Run all tests
nimble test

# Run a specific test
nim cpp -r --hints:off tests/tensor/test_transpose.nim

# Check for TODOs
grep -r "TODO" flambeau/

# Verify installation
nim cpp -r examples/tensor_ops_demo.nim
nim cpp -r examples/nn_xor_complete.nim
```

- LibTorch C++ API: https://pytorch.org/cppdocs/
- PyTorch Python API: https://pytorch.org/docs/ (for understanding operations)
- Arraymancer: https://github.com/mratsim/Arraymancer (for API inspiration)
- Flambeau GitHub: Check issues and PRs for context
Last Updated: 2026-01-14
Status: Core functionality complete, 6 high-priority TODOs remain
Test Coverage: Excellent (all tensor tests passing)