Demo recording: `algonaut-record.mp4`
Algonaut is an on-demand, container-based code execution editor platform. Each editor session is called a cube, which runs in its own isolated Docker container, offering live sockets, persistent storage, and language-specific runtime environments.
Think: spin up a coding sandbox only when needed, isolate it, sync via sockets, and kill it when done — all using Redis and Rust.
- A Cube is a lightweight, ephemeral code editor environment.
- Each cube is represented as a standalone container.
- Exposes two ports:
  - 🚪 Port 1: REST API for project/file operations
  - 📡 Port 2: WebSocket for live sync and execution
- A Rust-based orchestrator continuously polls a Redis queue (`execution`)
- New jobs are pulled every 10 seconds
- Uses CLI-based Docker commands (see the sketch after this list) to:
  - Spin up containers on-demand
  - Attach containers to a Docker network
  - Rehydrate containers with previous session data
  - Gracefully stop & remove containers after execution
- On teardown, the container file system is uploaded to persistent storage
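A minimal sketch of what that CLI-driven lifecycle can look like from Rust; the exact `docker` flags, container names, and helper functions below are illustrative assumptions, not the project's actual code:

```rust
use std::io;
use std::process::Command;

/// Spin up a cube container and attach it to the shared Docker network.
/// (Illustrative: real invocations likely add port mappings and volumes.)
fn start_cube(cube_id: &str, image: &str, network: &str) -> io::Result<()> {
    let status = Command::new("docker")
        .args(["run", "-d", "--name", cube_id, "--network", network, image])
        .status()?;
    if !status.success() {
        eprintln!("docker run failed for cube {cube_id}");
    }
    Ok(())
}

/// Gracefully stop and remove the container once execution is done.
fn teardown_cube(cube_id: &str) -> io::Result<()> {
    Command::new("docker").args(["stop", cube_id]).status()?;
    Command::new("docker").args(["rm", cube_id]).status()?;
    Ok(())
}
```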
| Layer | Tech |
|---|---|
| Orchestration | Rust (`tokio`, `redis`, `dotenvy`) |
| Runtime Execution | Node.js / TypeScript-based containers |
| Message Queue | Redis |
| Persistence | S3-compatible backend |
| Containerization | Docker |
To trigger a cube, push a job into the Redis `execution` queue:
```json
{
  "id": "cube_abc123",
  "user_id": "user_xyz",
  "name": "PythonSandbox",
  "cube_type": "python"
}
```
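For reference, a job like this could be enqueued from Rust with the `redis` crate roughly as follows; whether the producer uses `LPUSH`, and which service owns that step, is an assumption here:

```rust
use redis::Commands;

fn enqueue_cube_job(redis_url: &str) -> redis::RedisResult<()> {
    let client = redis::Client::open(redis_url)?;
    let mut con = client.get_connection()?;

    // Same shape as the JSON payload shown above.
    let job = r#"{"id":"cube_abc123","user_id":"user_xyz","name":"PythonSandbox","cube_type":"python"}"#;

    // Assumption: jobs are pushed onto the `execution` list.
    let _: () = con.lpush("execution", job)?;
    Ok(())
}
```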
- 🧠 Frontend sends a request to launch a cube
- 📨 Backend pushes a job to Redis (`execution`)
- ⚙️ Algonaut orchestrator (Rust) pulls the job from the queue (see the polling sketch after this list)
- 🐳 Docker container is created and attached to network
- 🌐 Cube becomes accessible on allocated ports
- 💾 On shutdown, filesystem is backed up to S3
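A rough sketch of the orchestrator's pull loop under a few assumptions: the `redis` crate with its `tokio-comp` feature, jobs popped with `RPOP` from the `execution` list, and a hypothetical `spawn_cube` helper standing in for the Docker work:

```rust
use std::time::Duration;
use redis::AsyncCommands;

#[tokio::main]
async fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_multiplexed_async_connection().await?;
    let mut tick = tokio::time::interval(Duration::from_secs(10));

    loop {
        tick.tick().await;
        // Pop at most one JSON job per tick; `None` means the queue was empty.
        let job: Option<String> = con.rpop("execution", None).await?;
        if let Some(job) = job {
            println!("pulled job: {job}");
            // spawn_cube(&job).await;  // hypothetical: create + network the container
        }
    }
}
```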
Create a `.env` file at the root:
```env
REDIS_URL=redis://localhost/
DOCKER_USER_CONTAINER_IMAGE=algonaut-runtime-node
DOCKER_NETWORK=algonaut-network
```
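The orchestrator reads these through `dotenvy`; a sketch of what that loading step might look like (the `Config` struct and field names are assumptions):

```rust
use std::env;

struct Config {
    redis_url: String,
    container_image: String,
    docker_network: String,
}

fn load_config() -> Config {
    // Pull variables from .env into the process environment (no-op if the file is absent).
    dotenvy::dotenv().ok();
    Config {
        redis_url: env::var("REDIS_URL").expect("REDIS_URL must be set"),
        container_image: env::var("DOCKER_USER_CONTAINER_IMAGE")
            .expect("DOCKER_USER_CONTAINER_IMAGE must be set"),
        docker_network: env::var("DOCKER_NETWORK").expect("DOCKER_NETWORK must be set"),
    }
}
```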
Each cube is backed by a runtime container which:
- Exposes a REST API for:
  - `GET /project` → returns the file system
  - `PUT /reinit` → restores prior session data
- Optionally hosts a WebSocket server for real-time collaboration or execution triggers
The container is designed to be language-agnostic and scriptable for multiple runtimes (Python, JS, Go, etc.)
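As an illustration, a client (the orchestrator, or anything else on the Docker network) could drive those two endpoints like this; the base URL, port, and response handling are assumptions layered on top of the routes listed above, and `reqwest` is not necessarily what the project uses:

```rust
use reqwest::Client;

/// Fetch the cube's file tree and ask it to restore prior session data.
/// Illustrative only; real responses are presumably structured JSON.
async fn sync_cube(base_url: &str) -> reqwest::Result<()> {
    let client = Client::new();

    // GET /project -> current file system snapshot
    let project = client
        .get(format!("{base_url}/project"))
        .send()
        .await?
        .text()
        .await?;
    println!("project snapshot: {} bytes", project.len());

    // PUT /reinit -> rehydrate the cube with previous session data
    client
        .put(format!("{base_url}/reinit"))
        .send()
        .await?
        .error_for_status()?;

    Ok(())
}
```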
```bash
# build
cargo build --release

# run orchestrator
cargo run
```
This will start polling Redis every 10 seconds and orchestrate containers dynamically.
- Supports hot container recovery and backup
- Modular design (easily pluggable runtimes)
- Easy to deploy with Docker Compose
- Can be integrated into interview platforms, learning environments, or cloud IDEs
- ✅ Redis-based job queue
- ✅ Docker CLI orchestration
- ✅ REST & WebSocket runtime support
- ✅ Persistent session backup (S3)
- 🔲 Language runtime abstraction
- 🔲 Collaboration engine (Yjs or CRDT)
- 🔲 Web dashboard for admins