Your entire AI workflow in one self-hosted web UI. One command to install.
BridgesLLM Portal runs on OpenClaw and turns a supported Ubuntu or Debian VPS into a complete browser-based AI workstation – multi-provider agent chat, sandboxed code execution, a shared browser your agent controls while you watch, remote desktop, project management, file manager, email, and more. If OpenClaw is already installed, the portal installer detects it and uses the existing installation.
Stop bouncing between tools. Chat with Claude, Codex, Gemini, or local models. Have your agent browse the web, write code, manage files, send email – all from one tab, on a server you own.
One command. Five minutes.
```bash
curl -fsSL https://bridgesllm.ai/install.sh | sudo bash
```

- Ubuntu 22.04+ or Debian 12+
- 3.5 GB RAM minimum (4 GB+ recommended)
- 35 GB free disk space
- Root or sudo access
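Before piping the installer, you can sanity-check the requirements above with a short preflight script. This is an illustrative sketch, not part of the official installer; the thresholds simply mirror the documented minimums:

```shell
#!/usr/bin/env bash
# Preflight sketch: check the documented minimums before installing.
# 3.5 GB RAM = 3670016 kB, 35 GB disk = 36700160 kB.

meets_min_kb() {            # usage: meets_min_kb <actual_kb> <required_kb>
  [ "$1" -ge "$2" ]
}

ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
disk_kb=$(df -k --output=avail / | tail -1)

meets_min_kb "$ram_kb" 3670016   || echo "WARN: less than 3.5 GB RAM"
meets_min_kb "$disk_kb" 36700160 || echo "WARN: less than 35 GB free disk"
[ "$(id -u)" -eq 0 ]             || echo "WARN: not running as root/sudo"
```

If any warning prints, fix that item first; the installer itself performs its own checks, so this is only a quick pre-flight.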
BridgesLLM Portal is still VPS-first, but Windows users can now test it locally through WSL 2 before renting a server.
Important: this Windows / WSL path is still experimental, currently untested in the field, and under active development. Treat it as a local product preview, not a production deployment target.
Recommended Windows Terminal bootstrapper (run from PowerShell); it checks for WSL/Ubuntu first and installs Ubuntu if missing:
```powershell
irm https://raw.githubusercontent.com/BridgesLLM-ai/portal/main/installer/install-windows.ps1 | iex
```

If Ubuntu WSL is already ready and you want to skip the bootstrapper, use the direct installer path:
```powershell
wsl -u root -- bash -lc "curl -fsSL https://bridgesllm.ai/install.sh | bash -s -- --local"
```

Then open http://localhost:4001 in Windows and skip the domain + HTTPS steps in the setup wizard. Public hosting, custom domains, and internet-facing share links remain VPS features in this beta path.
See docs/WINDOWS_WSL_BETA.md for the full reasoning, caveats, and references.
Visit bridgesllm.ai for live video demos of every feature.
Talk to Claude, Codex, Gemini, or Ollama through the provider path that fits each one – account sign-in where supported, Claude setup-token flow, API keys, or local models. Switch models mid-conversation. Powered by OpenClaw.
Your agent controls a real Chrome browser via CDP – navigating, clicking, filling forms, extracting data – while you watch live on the remote desktop. Ask it to research something, check a page for bugs, or automate a web workflow.
Create projects, edit code in-browser with Monaco Editor, and assign AI agents to tasks. Each project runs in an isolated Docker container. Git integration, live preview, autonomous background agents.
Full graphical desktop via NoVNC – accessible from any device. Run GUI apps, browser automation, or visual workflows without SSH.
Full xterm.js terminal in the browser. Run commands, manage packages, monitor your server – no SSH client needed.
Browse, upload, edit, and manage server files. Drag-and-drop, in-browser editing, archive extraction.
Built-in Stalwart mail server. Read, compose, and send email with rich HTML rendering and attachments – from your own domain.
Schedule recurring AI tasks with cron from the browser. Monitoring, reports, maintenance – runs while you sleep.
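Schedules use the standard five-field cron syntax. A sketch of a weekly slot (the script path is a hypothetical placeholder; actual tasks are configured from the browser, not by editing a crontab):

```
# m  h  dom mon dow  command
  0  3  *   *   1    /usr/local/bin/weekly-report.sh   # hypothetical: Mondays at 03:00
```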
Browse and install agent skills from ClawHub with one click. Configure MCP tools and extend your agent's capabilities.
Everything configured in-browser. Domain, SSL, providers, users – no CLI expertise needed. Codex and Gemini support account sign-in, Claude uses the guided setup-token flow, and key-based providers use API keys.
One-click updates from the browser. Admin dashboard with user management, storage monitoring, and session controls.
- Hosted installer/tarball parity is restored for the latest OpenClaw runtime: the public update path now ships the same OpenClaw 2026.4.29 compatibility helper logic that was already present in source, including the newer reply-prefix heartbeat relay matcher and Gemini CLI patch path.
- OpenClaw 2026.4.29 was re-smoked on the test box before release: after the required manual gateway restart on that host, portal-backed model discovery and a disposable chat send both completed successfully again.
- The OpenClaw session sidebar stops choking busy portals: Agent Chat no longer opens large session transcript/checkpoint files just to build the main-session sidebar, which fixes the event-loop stalls that made gateway chat, hosted pages, and shared pages lag or time out on installs with a large OpenClaw session store.
- Main-session polling is bounded now: the portal briefly caches parsed main-session metadata by file stat, so routine session-list refreshes stop re-parsing the whole store over and over on active installs.
- Claude account setup is truthful again: the AI Setup flow now extracts the real Claude authorize URL instead of gluing the terminal prompt onto the `state` parameter, which fixes broken Anthropic sign-in links in the portal.
- OpenClaw compatibility checks keep up with newer upstream bundles: the portal hotfix status and bundled helper now recognize `heartbeat-events-filter-*` and `claude-live-session-*` drift while still verifying the Gemini tool-wiring path.
- OpenClaw chat defaults are saner on non-Claude models: main chat and project chat now fall back to `high` thinking where adaptive Claude-specific behavior does not apply, and Codex setup copy is clearer about the stable default vs fallback path.
- Gemini account setup now lands on the right default model: the Gemini OAuth flow and default-model picker stay aligned, so a successful sign-in produces a usable provider selection instead of a confusing mismatch.
- Compaction notices are reused and deduped more cleanly: Agent Chat and project chat now share the same compaction notice block, and restored history keeps one truthful compaction notice instead of echoing duplicates.
- Windows / WSL local beta installs are easier to find: the main install hero now gives separate Linux and Windows one-paste commands with dedicated copy buttons, keeping the public release path explicit without internal-only notes.
- The installer now has a proper local beta path on WSL: Windows users can run the localhost profile without Caddy or UFW, which makes the test-drive flow a real first-class entrypoint instead of a hand-waved promise.
- Installer/update compatibility is finally automatic: normal install and update flows now auto-apply the validated OpenClaw relay + Gemini compatibility patch set when needed instead of burying that repair behind a manual Settings step.
- Agent Chat and project chat are more truthful during live OpenClaw runs: capable runtimes now use real interrupt-and-steer behavior, running turns surface an immediate `Thinking…` state, async follow-ups can finish cleanly even if the browser disconnects, and hidden Portal Backend RPC / heartbeat artifact text is stripped out of restored history.
- Project chat works in a saner repo context: project agents now run against `/workspace/project`, assistant auto-commit is back after successful runs, transient `.agent-*` scratch files are shelved out of git operations, and revert/auto-commit flows stop confusing portal-maintenance state with user work.
- AI setup and provider status got a big honesty pass: Claude setup-token finishing no longer looks frozen, Codex/Gemini native CLI auth is sturdier, provider model IDs/fallbacks are normalized consistently, and provider cleanup removes stale auth/model config together.
- Windows / WSL beta messaging now tells the truth: local installs are labeled as experimental localhost test drives, while public hosting, stable share links, and custom-domain HTTPS remain VPS-first. The release also bundles the Gemini-aware compatibility helper, new AI-setup/OpenClaw parsing tests, and Windows beta notes.
- Live chat-state reconciliation is finally honest: Agent Chat and project chat now preserve pending user turns and the active assistant bubble while history reloads, delay post-turn reconciliation until the gateway catches up, and restore separate thinking, tool, text, and compaction phases on refresh instead of flattening them into stale garbage.
- Tool activity is much clearer while a run is in flight: the composer rail, main chat, and project chat now share tool-specific glyphs and status copy, and running tools stay visible during maintenance or compaction instead of being replaced by a fake generic thinking message.
- Fresh OpenClaw sessions and model controls are more reliable: `new-*` portal sessions can be materialized on demand before model patching, session-control loading states are more truthful, and model discovery now reads the live OpenClaw config instead of relying on brittle CLI scraping.
- Projects regained richer public-safe preview coverage: Markdown/HTML, PDF, spreadsheet, text, Monaco, and binary-file viewers are back in the public source tree, which fixes clean public builds and improves in-browser file previews.
- Ops and release hardening kept pace: Remote Desktop is locked tighter behind elevated auth and loopback-only websockify, Gemini account OAuth is a first-class setup path, gateway restart fallback is safer on hosts without user-systemd, and the public export script now blocks dirty trees plus beta/staging contamination before a push can happen.
- OpenClaw compatibility hotfix status works again on current installs: the portal now inspects the real hashed `heartbeat-runner-*` and `get-reply-*` bundles, recognizes the newer upstream exec-completion detector, and stops falsely calling modern OpenClaw builds unsupported when the relay hotfix can still be applied safely.
- This patch release fixes public release parity, not just local production knowledge: the source tree, installer artifacts, and hosted download now all ship the same compatibility behavior instead of depending on a private manual workaround, and the public source export again contains the lazy project viewer components needed for a clean frontend build.
- Installer and updater users now get the OpenClaw compatibility helper fix too: the bundled long-run relay hotfix script now resolves the real current hashed OpenClaw bundles, patches the right `get-reply` file, and keeps installer/update artifacts aligned with the live production compatibility fix instead of leaving the repair stranded in source only.
- This is a clean patch release for distribution parity: public GitHub source, hosted installer, and hosted tarball now all ship the same helper refresh under a proper new version instead of silently changing bits behind `3.25.0`.
- Agent Chat and project chat finally act like the same product: both surfaces now share the same status rail, project chat lost the stray inline stop button, misleading thought-process pills are gone, live run status copy is clearer, and project chat gained proper run-resume, approval, reconnect, model-persistence, and live metadata handling.
- Auth, setup, reinstall, and password flows got a serious hardening pass: protected deep links preserve their destination, password policy is enforced consistently across setup and recovery flows, reinstall/reset/password-change paths revoke old sessions correctly, and signed-out pollers stop hammering protected endpoints.
- Permission boundaries are more truthful: non-admin users no longer get unusable exec-approval prompts, dashboard reconnect/update controls respect role boundaries, Feature Readiness is exposed to `SUB_ADMIN`, and Tasks, Files, Projects, Apps, and Terminal routes now align with the access the UI actually promises.
- Cold-open performance is materially better across the app: Agent Chats, Dashboard, Projects, Files, Mail, and Settings all shed real startup work through route lazy-loading, deferred charts and history fetches, demand-driven direct-gateway bootstrap, and bounded thumbnail loading.
- Operational polish is much better: gateway/auth restart noise is deduped, background task rows are less spammy, setup and admin copy are cleaner, and several empty states and settings controls now read like finished product instead of debug leftovers.
- OpenClaw compatibility and release packaging are tougher: the bundled compatibility helper now patches current OpenClaw bundle shapes, the release tarball no longer risks shipping the placeholder Prisma DB, and the public release path is documented around the actual dev-container SOP.
See the full CHANGELOG for all releases.
```mermaid
flowchart TD
  Browser["Your Browser"] -->|HTTPS via Caddy| Portal
  subgraph Portal["BridgesLLM Portal"]
    UI["React UI\n(Vite SPA)"]
    API["Express API\n(Node.js)"]
    UI --> API
    API --> Gateway["OpenClaw Gateway\nPersistent WebSocket runtime"]
    API --> DB["PostgreSQL\nPortal data"]
    API --> Docker["Docker sandboxes\nProject isolation"]
    API --> Mail["Stalwart Mail\nLoopback mail server"]
  end
  Gateway --> Claude["Claude\nSetup-token / Extra Usage"]
  Gateway --> Codex["Codex\nAccount sign-in"]
  Gateway --> Gemini["Gemini\nAccount sign-in"]
  Gateway --> Ollama["Ollama\nLocal models"]
```
- Caddy terminates HTTPS (automatic Let's Encrypt) and reverse-proxies to the backend.
- OpenClaw Gateway manages agent sessions, tool approvals, and provider communication over persistent WebSocket.
- Docker sandboxes isolate each project's code execution from the host.
- Stalwart provides email on the loopback interface – not exposed as an open relay.
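The Caddy layer above can be pictured as a minimal Caddyfile (illustrative only; the installer generates the real configuration, and the upstream port 4001 is assumed from the localhost beta URL):

```caddyfile
portal.example.com {
    # Caddy obtains and renews the Let's Encrypt certificate automatically
    reverse_proxy localhost:4001
}
```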
BridgesLLM Portal itself is free. Your cost is the combination of:
- your VPS
- the provider path you choose
- your usage pattern
Typical cost components:
| Component | Typical cost model |
|---|---|
| VPS | Usually ~$20–40/mo for a comfortably sized box |
| Codex / Gemini | Account or subscription-style sign-in paths are available |
| Claude | Claude plan plus Anthropic Extra Usage for OpenClaw-driven traffic |
| API-key providers | Usage-based billing |
| Ollama | Local compute on your own server |
There is no single universal monthly total because provider billing differs by path.
| Layer | Technology |
|---|---|
| Frontend | React 19, Vite, Tailwind CSS, Monaco Editor |
| Backend | Node.js, Express, Prisma, PostgreSQL |
| Agent Framework | OpenClaw (open-source) |
| AI Providers | Anthropic (Claude), OpenAI (Codex), Google (Gemini), Ollama (local) |
| Reverse Proxy | Caddy (automatic HTTPS) |
| Containers | Docker (per-project sandboxing) |
| Remote Desktop | NoVNC + Xfce4 |
| Mail | Stalwart Mail Server |
Best path: click the Update button in the portal dashboard. Or from SSH:
```bash
curl -fsSL https://bridgesllm.ai/install.sh | sudo bash -s -- --update
```

The update flow updates the portal and checks installed dependencies, including OpenClaw, so you usually do not need to update OpenClaw separately first. On affected installs it also auto-reapplies the temporary portal compatibility hotfix, so the relay and Gemini compatibility markers do not stay stranded behind a buried Settings button.
Updates preserve your data, projects, and configuration.
Yes. The installer detects an existing OpenClaw installation and uses it. If OpenClaw is not already present, the installer installs it for you.
Yes, in beta form through WSL 2. The local beta path is experimental, currently untested in the field, and meant for hands-on testing on http://localhost:4001, not as the main production deployment model. See docs/WINDOWS_WSL_BETA.md.
No. Codex and Gemini support account sign-in flows. Claude uses the guided setup-token flow and currently requires Anthropic Extra Usage for OpenClaw-driven requests. Key-based providers still use API keys, and Ollama is local.
Yes. Your portal data, files, projects, and local services stay on your server. If you connect external AI providers, model requests still go to the provider you chose.
The installer sets up the portal app, OpenClaw, PostgreSQL, Caddy, and the main system services. The browser setup flow then handles your admin account, provider connection, and domain/SSL steps.
- HTTPS everywhere – automatic Let's Encrypt SSL with HSTS, CSP, and strict security headers
- Sandboxed code execution – each project runs in an isolated Docker container with filesystem restrictions
- Path traversal protection – dedicated middleware blocks directory escapes, symlink attacks, and system path access
- Role-based access control – Owner, Admin, User, and Viewer roles with account approval workflow
- JWT authentication – short-lived access tokens, no query-parameter auth
- Firewall by default – UFW configured during install; only SSH, HTTP, and HTTPS exposed
- Malware scanning – uploaded files scanned with ClamAV before storage
- Mail server isolation – Stalwart locked to loopback interface, not exposed as an open relay
- Shell-escape enforcement – all user-influenced parameters are properly escaped before reaching shell commands
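The shell-escape point can be illustrated with a minimal sketch (this is not the portal's actual middleware): user-influenced values must be escaped, for example with bash's `printf '%q'`, before they are interpolated into a command string.

```shell
#!/usr/bin/env bash
# Hostile input: the ';' would start a second command if interpolated raw.
user_input='notes.txt; echo PWNED'

escape() { printf '%q' "$1"; }   # bash-escapes all shell metacharacters

# DANGEROUS (left commented out): bash -c "ls -l $user_input" would run 'echo PWNED'.
# SAFE: the escaped value is parsed back by the inner shell as one literal argument.
bash -c "printf '%s\n' $(escape "$user_input")"
```

Running the last line prints the filename back as a single literal string instead of executing the injected command.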
For the full security policy, see SECURITY.md.
- Chat reliability hardening – survive hard refresh, tab close, and reconnect without losing streamed content or showing stale state
- Clean chat output – strip internal tool noise, approval artifacts, and system metadata from agent responses so conversations read like conversations
- Full OpenClaw feature parity – surface all OpenClaw capabilities (FYI mode, tool approval workflows, new agent features) as they ship upstream
- Agent management UI – create, edit, configure, and delete agents directly from the Agent Tools page
- GitHub integration – push/pull from the project panel
- Team collaboration – multi-user project sharing and permissions
- Email polish – forwarding rules, HTML signatures, folder management
- Mobile-optimized UI – responsive layouts for phone and tablet
Contributions welcome! Please open an issue first to discuss significant changes.
- Fork the repo
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
See CONTRIBUTING.md for full details.
MIT License – see LICENSE.
- OpenClaw – the agent framework powering intelligent features
- Anthropic, OpenAI, Google – AI providers
- Caddy – automatic HTTPS reverse proxy
- Stalwart – mail server
- NoVNC – browser-based VNC client
Built by Robert Bridges
Website ·
X (Twitter) ·
Issues ·
Releases
