Execute commands on your machine from any MCP-compatible AI client.
Code Remote bridges MCP (Model Context Protocol) with your local machine, enabling AI assistants to run shell commands, read/write files, and explore your filesystem - all from your phone or any web browser.
Cloud/Self-Hosted Setup:
+----------------+          +-------------------+          +----------------+
|                |   SSE    |                   |    WS    |                |
|   AI Client    |<-------->|  Server (Cloud)   |<-------->|     Agent      |
|     (MCP)      |   MCP    |       Relay       |          |    (Daemon)    |
+----------------+          +-------------------+          +----------------+
Local Setup with ngrok (everything on one machine):
+----------------+          +-------------------+          +---------------------------+
|                |  HTTPS   |                   |          |       Your Machine        |
|   AI Client    |--------->|   ngrok tunnel    |--------->|  +-------+     +-------+  |
|     (MCP)      |          |                   |          |  |Server |<--->| Agent |  |
+----------------+          +-------------------+          |  +-------+     +-------+  |
                                                            +---------------------------+
Data Flow:
- You chat with an AI assistant (Claude.ai, etc.)
- The AI calls MCP tools (run_shell_command, read_file, etc.)
- The server queues commands and sends them to the connected agent via WebSocket
- The agent executes commands on your machine and returns results
- Results flow back to the AI through SSE
- uv - Fast Python package manager (required for the agent). Install with `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Python 3.10+ - For running the agent
- Docker - For self-hosted server deployment (Options A & B)
- Fly.io CLI - For managed deployment (Option C only)
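A quick way to confirm the prerequisites are in place (the `fly` CLI is only needed if you plan on Option C):

```bash
# Each command prints a version string if the tool is installed
uv --version        # uv package manager
python3 --version   # needs 3.10+
docker --version    # only for Options A & B
fly version         # only for Option C (Fly.io CLI)
```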
Choose your deployment method:
Option A: Run everything locally with ngrok for HTTPS tunneling. No cloud deployment needed.
# Terminal 1: Start the server
export AUTH_TOKEN=$(openssl rand -hex 32)
echo "AUTH_TOKEN=$AUTH_TOKEN" > .env
docker-compose up -d
# Terminal 2: Start ngrok tunnel
ngrok http 8080
# Note the https://xxx.ngrok-free.app URL
# Terminal 3: Start the agent (connects locally)
cd agent
uv venv && uv pip install -r requirements.txt
source .venv/bin/activate
export AUTH_TOKEN=$(cat ../.env | cut -d= -f2)
export RELAY_URL=ws://localhost:8080/ws/agent
python agent.py

Add to your MCP client:
- URL: https://xxx.ngrok-free.app/sse (your ngrok URL)
Note: Free ngrok tier gives a random URL that changes on restart. Paid plans offer stable subdomains.
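To sanity-check the tunnel before pointing a client at it, you can hit the SSE endpoint directly. This is just a reachability check; whether the token must be passed as a header for this endpoint depends on the server, so treat that part as an assumption:

```bash
# SSE endpoints stream, so cap the request; a 200 followed by a timeout means routing works
curl -i --max-time 5 \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  https://xxx.ngrok-free.app/sse
```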
Option B: Run the server on your own infrastructure with a proper domain.
# 1. Generate an auth token
export AUTH_TOKEN=$(openssl rand -hex 32)
echo "AUTH_TOKEN=$AUTH_TOKEN" > .env
# 2. Start the server
docker-compose up -d
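# (Optional) confirm the container is up and healthy before moving on to HTTPS
docker-compose ps
docker-compose logs --tail=20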
# 3. Set up HTTPS (see HTTPS Setup section below)

Option C: Deploy to Fly.io for automatic HTTPS and global availability.
cd server
# Edit fly.toml and set your app name
# Change: app = "YOUR-APP-NAME"
# To: app = "yourname-code-remote"
fly launch --name YOUR-APP-NAME --region dfw --no-deploy
fly volumes create code_remote_data --size 1 --region dfw
fly secrets set AUTH_TOKEN=$(openssl rand -hex 32)
fly deploy

Your URLs will be:
- MCP endpoint: https://YOUR-APP-NAME.fly.dev/sse
- WebSocket: wss://YOUR-APP-NAME.fly.dev/ws/agent
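After the deploy, a couple of standard flyctl commands are handy for confirming the app is healthy and watching the agent connect:

```bash
fly status         # machine state and health checks
fly logs           # tail server logs, e.g. to watch the agent's WebSocket connect
fly secrets list   # confirms AUTH_TOKEN is set (shows a digest, not the value)
```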
Native agent (runs directly on your machine):
cd agent
./setup.sh
# Enter your relay URL (e.g., wss://your-server.example.com/ws/agent)
# Enter your auth token
./run.sh

Sandboxed agent (runs in Docker container):
# Run full stack with containerized agent
docker-compose --profile full up -d

The sandboxed agent runs as a non-root user in an isolated Linux environment with common development tools (git, curl, vim, ripgrep, etc.). Files are restricted to /home/agent/workspace inside the container.
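To poke at the sandbox, you can exec into the containerized agent. The service name `agent` below is an assumption; check docker-compose.yml for the actual name:

```bash
# Assumes the containerized agent service is named "agent" in docker-compose.yml
docker-compose --profile full ps
docker-compose --profile full exec agent whoami                     # should not be root
docker-compose --profile full exec agent ls /home/agent/workspace   # the only writable file area
```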
Add to your MCP client settings:
- URL: https://your-server.example.com/sse
For Claude.ai: Settings > Connectors > MCP
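If your client is Claude Code rather than Claude.ai, the CLI can register the endpoint directly. Flags can vary by version, so treat this as a sketch and check `claude mcp add --help`:

```bash
# Register the remote SSE endpoint as an MCP server named "code-remote"
claude mcp add --transport sse code-remote https://your-server.example.com/sse
claude mcp list   # verify it shows up
```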
When deploying your own instance:
| File | What to Change |
|---|---|
| `server/fly.toml` | `app = "YOUR-APP-NAME"` |
The setup.sh script will prompt for your relay URL and auth token, creating the .env file automatically. You only need to edit fly.toml before deploying.
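For reference, the agent configuration boils down to the same two values used in the manual setup above; the exact file setup.sh writes may differ, so this is illustrative:

```bash
# agent/.env (illustrative — setup.sh generates this for you)
RELAY_URL=wss://your-server.example.com/ws/agent
AUTH_TOKEN=<the token generated on the server>
```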
| Tool | Description |
|---|---|
| `run_shell_command` | Execute any shell command |
| `read_file` | Read file contents |
| `write_file` | Write content to a file |
| `list_directory` | List directory contents |
| `check_agent_status` | Verify agent is connected |
- [Setup Guide](docs/SETUP.md) - Complete installation instructions
- [Deployment](docs/DEPLOYMENT.md) - Fly.io deployment and management
- [Architecture](docs/ARCHITECTURE.md) - System design and security
MCP clients typically require HTTPS. If you're self-hosting with Docker, you'll need a reverse proxy. Here's a simple example with Caddy:
# Install Caddy (https://caddyserver.com/docs/install)
# Create Caddyfile:
cat > Caddyfile << 'EOF'
your-domain.com {
reverse_proxy localhost:8080
}
EOF
# Run Caddy (automatically provisions TLS via Let's Encrypt)
caddy run

For other options:
- nginx: Use certbot for Let's Encrypt certificates
- Traefik: Built-in ACME support for automatic TLS
- Cloudflare Tunnel: Free option that handles TLS for you
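If you go the nginx route, note that the /ws/agent path needs WebSocket upgrade headers in addition to plain proxying. A minimal sketch, assuming the common conf.d layout (the config path varies by distro, and certbot injects the ssl_certificate directives for you):

```bash
sudo tee /etc/nginx/conf.d/code-remote.conf > /dev/null << 'EOF'
server {
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        # Required for the agent's WebSocket connection on /ws/agent
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        # Keep long-lived SSE/WebSocket connections open
        proxy_read_timeout 3600s;
        proxy_buffering off;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
# Then obtain a certificate: sudo certbot --nginx -d your-domain.com
```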
- Token-based authentication between all components
- Agent restricts file access to home directory, /tmp, /var/tmp
- All traffic encrypted via HTTPS/WSS
- Commands logged to SQLite for audit trail
- Sandboxed mode: Run the agent in a Docker container for full isolation
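Because the token is the only shared secret, rotating it is cheap. A sketch of the workflow, to be adapted to whichever deployment option you used:

```bash
# Generate a fresh token and restart the server with it
NEW_TOKEN=$(openssl rand -hex 32)
echo "AUTH_TOKEN=$NEW_TOKEN" > .env
docker-compose up -d                      # self-hosted (Options A & B)
# fly secrets set AUTH_TOKEN=$NEW_TOKEN   # Fly.io (Option C) — restarts the app
# Finally, update the agent (re-run agent/setup.sh or edit its env) and restart it
```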
code-remote/
├── docker-compose.yml # Self-hosted deployment
├── .env.example # Environment template
│
├── server/ # Server (Docker or Fly.io)
│ ├── server.py # Combined MCP + Relay server
│ ├── Dockerfile
│ ├── fly.toml # Fly.io config (change app name)
│ └── requirements.txt
│
├── agent/ # Agent (native or containerized)
│ ├── agent.py # WebSocket client daemon
│ ├── Dockerfile # Sandboxed Linux agent
│ ├── requirements.txt
│ ├── setup.sh # Setup script (prompts for URL)
│ ├── run.sh # Run script
│ └── com.code.remote-agent.plist
│
└── docs/ # Documentation
├── SETUP.md
├── DEPLOYMENT.md
└── ARCHITECTURE.md
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). See LICENSE.
Drew Cain (@groksrc)