Fast CPU renderer for synthetic training data generation. No GPU required.
tinygl-synth is a lightweight 3D renderer designed for generating synthetic training data for machine learning. It runs entirely on CPU and produces:
- RGB images - rendered scenes
- Depth maps - per-pixel depth values
- Segmentation masks - per-object instance IDs
- Surface normals - for 3D understanding
Speed: ~2500 samples/second on a typical CPU.
| Problem | tinygl-synth Solution |
|---|---|
| Labeling data is expensive | Ground truth is FREE - we generate it! |
| Need GPU for rendering | Runs on CPU, anywhere |
| Complex dependencies | Zero external dependencies (C library) |
| Slow iteration | Generate -> train -> iterate in seconds |
Generate training data for Visual Question Answering:
```
python examples/demo_spatial_reasoning.py
```
Output: 5000 scenes with 50,000 QA pairs in ~3 seconds!
Example questions generated automatically (see the sketch after this list for how they are derived):
- "How many red objects?" -> 2
- "Which color is highest?" -> yellow
- "Are there more blue than green?" -> no
Build the C library:
```
mkdir build && cd build
cmake ..
cmake --build . --config Release
cd ..
```
On Windows, the build rules copy `tinygl_synth.dll` next to the example/test executables automatically.
Install the Python bindings:
```
cd python
pip install -e .
pip install imageio opencv-python   # for the demos
cd ..
```
Note: the `-e` flag installs in "editable" mode, so changes to the Python code take effect immediately.
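A quick smoke test that the bindings and the native library load correctly. This is a minimal sketch using only the `Context` calls shown in the API snippet further down; no additional setup is assumed:

```python
from tinygl_synth import Context

ctx = Context(64, 64)        # small context, just to exercise the C library
ctx.clear(0, 0, 0, 1.0)
print("rgb buffer:", ctx.rgb_tensor().shape, ctx.rgb_tensor().dtype)  # expect (64, 64, 3) uint8
```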
Verify the generated labels:
```
python examples/verify_qa.py
```
This creates `samples/verify_qa_grid.png` with labeled images you can verify manually.
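To inspect the grid, `opencv-python` (installed above for the demos) is enough; a minimal viewer sketch:

```python
import cv2

img = cv2.imread("samples/verify_qa_grid.png")   # path written by verify_qa.py
cv2.imshow("verify_qa_grid", img)
cv2.waitKey(0)                                   # close the window with any key press
cv2.destroyAllWindows()
```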
Run the demos:
```
python examples/demo_spatial_reasoning.py
python examples/demo_spin_cube.py        # Multi-modal output
python examples/demo_speed_test.py       # Speed benchmark
```
Minimal Python API usage:
```python
from tinygl_synth import Context

# Create 256x256 rendering context
ctx = Context(256, 256)
# Clear with background color
ctx.clear(50, 50, 60, 1.0)
# Set camera (fov, near, far, view_matrix)
ctx.set_camera(60.0, 0.1, 10.0, view_matrix)
# Add colored mesh
ctx.add_mesh_colored(vertices, indices, object_id=1)
# Render
ctx.render()
# Get outputs (numpy arrays)
rgb = ctx.rgb_tensor() # (H, W, 3) uint8
depth = ctx.depth_tensor() # (H, W) float32
seg = ctx.segmentation_tensor()      # (H, W) uint32
```
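Continuing from the snippet above, the returned tensors are plain NumPy arrays, so persisting a sample needs nothing beyond NumPy and `imageio` (already listed under the demo extras). File names here are only an example:

```python
import numpy as np
import imageio.v2 as imageio

# RGB is already (H, W, 3) uint8, so it can be written straight to PNG.
imageio.imwrite("sample_0000_rgb.png", rgb)

# Depth (float32) and segmentation (uint32 instance IDs) keep full precision as .npy.
np.save("sample_0000_depth.npy", depth)
np.save("sample_0000_seg.npy", seg)
```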
Included example scripts (all under `examples/`):

| Example | Description |
|---|---|
| `demo_spatial_reasoning.py` | Generate VQA training data with automatic labels |
| `demo_speed_test.py` | Speed benchmark - colorful orbiting cubes |
| `demo_spin_cube.py` | Simple spinning cube with multi-modal output |
| `pose_estimation_demo.py` | Train a pose estimator on synthetic data |
| `train_pytorch_policy.py` | Full PyTorch training pipeline |
Output showcase files, located in `samples/`:
| File | Description |
|---|---|
| `spatial_reasoning_grid.png` | 36 scenes with varying object counts |
| `spatial_reasoning_demo.gif` | Animated scenes with labels |
| `speed_test.gif` | High-FPS colorful rendering |
| `pose_estimation_demo.png` | Sub-degree pose accuracy |
| `multimodal_output.gif` | RGB + Depth + Segmentation |
On an Intel i7 (single-threaded; a rough timing sketch follows this list):
- 2500+ samples/sec at 128×128
- 700+ samples/sec at 256×256
- 120+ FPS for animated demos
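`examples/demo_speed_test.py` is the real benchmark; the sketch below only shows the shape of such a timing loop with the `Context` API. It renders an empty scene, which assumes `render()` tolerates a context with no camera or meshes set, so it measures per-frame overhead rather than the full scene throughput quoted above:

```python
import time
from tinygl_synth import Context

ctx = Context(128, 128)
n_frames = 1000

start = time.perf_counter()
for _ in range(n_frames):
    ctx.clear(50, 50, 60, 1.0)
    ctx.render()
    _ = ctx.rgb_tensor()          # include read-back cost in the measurement
elapsed = time.perf_counter() - start

print(f"{n_frames / elapsed:.0f} frames/sec at 128x128 (empty scene)")
```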
Use cases:
- VLM Training Data - Spatial reasoning, counting, relationships
- Object Detection - Synthetic pretraining with perfect labels
- Depth Estimation - RGB→Depth with free ground truth (see the sketch after this list)
- Pose Estimation - 6DoF object pose from synthetic scenes
- RL Environments - Fast vision-based observations
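For the depth-estimation case, every render already yields a pixel-aligned (image, depth) pair. A minimal collection sketch, assuming a hypothetical `render_random_scene(ctx)` helper that clears the context, sets the camera, adds meshes, and calls `render()`:

```python
import numpy as np

def collect_depth_pairs(ctx, n_samples, render_random_scene):
    """Return (images, depths) arrays for a simple RGB->Depth training set."""
    images, depths = [], []
    for _ in range(n_samples):
        render_random_scene(ctx)                                      # hypothetical scene setup + render
        images.append(ctx.rgb_tensor().astype(np.float32) / 255.0)    # (H, W, 3) in [0, 1]
        depths.append(ctx.depth_tensor().copy())                      # (H, W) float32 ground truth
    return np.stack(images), np.stack(depths)
```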
The same functionality is available directly from C:
```c
#include <tinygl_synth/synth.h>

TGLSynthContext* ctx = tgls_create_context(256, 256);
tgls_clear(ctx, 50, 50, 60, 1.0f);
tgls_set_camera(ctx, fov, near, far, view_matrix);
tgls_add_mesh(ctx, vertices, n_verts, indices, n_indices, object_id);
tgls_render(ctx);
uint8_t* rgb = tgls_get_rgb_buffer(ctx);
float* depth = tgls_get_depth_buffer(ctx);
uint32_t* seg = tgls_get_segmentation_buffer(ctx);
tgls_destroy_context(ctx);
```
Platform support:
- Windows: supported and tested (MSVC + CMake).
- Linux/macOS: source code and CMake support both (`.so`/`.dylib` naming is handled in the Python loader), but CI is not set up yet.
```
tinygl-synth/
├── src/        # C source code
├── include/    # C headers
├── python/     # Python bindings
├── examples/   # Demo scripts
├── samples/    # Output showcase
└── tests/      # C tests
```
MIT License - see LICENSE

