Elm-architecture Telegram bot for managing Vast.ai GPU instances.
Rent, deploy, and destroy GPU instances on Vast.ai via Telegram commands.
Built with immutable state, pure update(), and dependency-injected handlers — no global state.
```
bot.py                   — entry point: wires config → client → store → handlers → telegram app
src/vast_automation/
├── model.py             — BotState (frozen) + Msg hierarchy + BootEvent types
├── update.py            — pure update(state, msg) → BotState
├── store.py             — Store: mutable cell, persists to JSON on every dispatch
├── config.py            — RuntimeConfig, OfferFilter, InstanceSpec, DeployConfig (frozen dataclasses)
├── vast.py              — VastClient dataclass + boot_instance() async generator
├── ssh.py               — SSHTarget dataclass + run_ssh / run_scp
├── deploy.py            — run_deploy() async coroutine, decoupled from Telegram
└── handlers/
    ├── lifecycle.py     — make_lifecycle_handlers() factory (/server_up, /stop, /kill)
    └── info.py          — make_info_handlers() factory (/start, /status, /logs, /debug)
```
Connection details (host, port, key path, user) are bundled into a single
SSHTarget value object defined in ssh.py. Handlers construct one
SSHTarget after boot and pass it through the entire call chain — no loose
host/port/key triplets leak across module boundaries.
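A sketch of what that value object and its use might look like. The field names and the `ssh_command` helper are plausible assumptions for illustration; the real `ssh.py` may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SSHTarget:
    """Bundles every connection detail for one rented instance."""
    host: str
    port: int
    key_path: str
    user: str = "root"

def ssh_command(target: SSHTarget, remote_cmd: str) -> list[str]:
    """Build the argv for a non-interactive ssh invocation against a target."""
    return [
        "ssh",
        "-i", target.key_path,
        "-p", str(target.port),
        "-o", "StrictHostKeyChecking=accept-new",
        f"{target.user}@{target.host}",
        remote_cmd,
    ]
```

Because the target is a single frozen value, `run_ssh` and `run_scp` can take it as their only connection argument, and any handler in the chain can pass it along without knowing its internals.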
```sh
pip install vast-automation
cp .env.example .env
# edit .env with your tokens
vast-bot
```

| Command | Description |
|---|---|
| `/server_up` | Rent cheapest eligible GPU, deploy, start services |
| `/stop` | Stop the running instance (preserves disk) |
| `/kill` | Destroy the instance immediately |
| `/status` | Show instance ID, URL, SSH command |
| `/logs` | Tail last 30 lines of docker compose logs |
| `/debug` | Check API key and available offers |
All configuration is supplied via environment variables (see `.env.example`).
| Variable | Required | Default |
|---|---|---|
| `TELEGRAM_TOKEN` | Yes | — |
| `VAST_API_KEY` | Yes | — |
| `VAST_SSH_KEY_PATH` | No | `~/.ssh/vast` |
| `VAST_TEMPLATE_HASH` | Yes | — |
| `VAST_IMAGE` | Yes | — |
| `VAST_GPU_NAME` | No | `RTX 4090` |
| `CLOUDFLARE_TUNNEL_TOKEN` | No | — |
| `WEBAPP_URL` | No | — |
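For reference, a `.env` might look like the following. This is built only from the variables in the table above; every value is a placeholder:

```sh
TELEGRAM_TOKEN=123456:ABC-your-bot-token
VAST_API_KEY=your-vast-api-key
VAST_TEMPLATE_HASH=your-template-hash
VAST_IMAGE=your/docker-image:tag
# optional (defaults shown where the table defines one):
VAST_SSH_KEY_PATH=~/.ssh/vast
VAST_GPU_NAME=RTX 4090
# CLOUDFLARE_TUNNEL_TOKEN=
# WEBAPP_URL=
```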
MIT