A distributed platform that enables users to share GPU resources and AI models. "Givers" can share their local Ollama models, and "Takers" can access them through secure tunneling.
- Live App: https://tunnelmind.harshkeshri.com
- NPM Package: https://www.npmjs.com/package/tunnelmind
TunnelMind ships with a CLI that lets you authenticate, start the local relay, and publish your Ollama models to the cloud.
```bash
npm install -g tunnelmind
```

Prerequisites:

- Node.js 18+
- Ollama installed and running locally (`ollama serve`)
- At least one model pulled via `ollama pull <model>`
Authenticate the CLI:
```bash
tunnelmind login
```

Start sharing your local models:

```bash
tunnelmind server
```

Optional flags:

- `--cloud-url` to point at a different TunnelMind cloud server
- `--ollama-url` to target a custom Ollama endpoint
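As a sketch of how these two flags might resolve (the function name and the cloud default below are assumptions for illustration, not documented TunnelMind internals; `11434` is Ollama's actual default port):

```javascript
// Illustrative only: resolve endpoint URLs from parsed CLI flags,
// falling back to assumed defaults when a flag is omitted.
function resolveEndpoints(flags) {
  return {
    cloudUrl: flags["cloud-url"] ?? "https://tunnelmind.harshkeshri.com", // assumed default
    ollamaUrl: flags["ollama-url"] ?? "http://localhost:11434", // Ollama's default port
  };
}

// Example: override only the Ollama endpoint.
const { cloudUrl, ollamaUrl } = resolveEndpoints({ "ollama-url": "http://localhost:11435" });
```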
Check the current session:
```bash
tunnelmind user
```

Log out and clear the local session:

```bash
tunnelmind logout
```

Repository layout:

- `cloud-server/` – Express + MongoDB backend managing givers, takers, chats, and the WebSocket relay.
- `local-server/` – CLI client that connects a giver's Ollama instance to the cloud server.
- `frontend/` – Next.js frontend for takers and givers to manage sessions, browse models, and chat.
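To illustrate the relay path from taker to giver, here is a minimal, hypothetical sketch of how a forwarded chat message could be handed to a giver's Ollama instance. `POST /api/chat` is Ollama's real HTTP endpoint; the function name and request shape here are illustrative, not TunnelMind's actual internals:

```javascript
// Build a request against Ollama's /api/chat endpoint.
// "stream: false" asks Ollama for a single JSON response
// instead of a chunked stream of partial messages.
function buildOllamaChatRequest(ollamaUrl, model, messages) {
  return {
    url: `${ollamaUrl}/api/chat`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// Usage (requires a local `ollama serve` with the model pulled):
// const { url, options } = buildOllamaChatRequest(
//   "http://localhost:11434", "llama3", [{ role: "user", content: "Hello" }]);
// const reply = await fetch(url, options).then((r) => r.json());
// console.log(reply.message.content);
```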
Install dependencies for all packages:
```bash
pnpm install:all
```

Start individual services:

```bash
# Cloud server
cd cloud-server && pnpm dev

# Local CLI (debug mode)
cd local-server && pnpm dev:cli

# Frontend
cd frontend && pnpm dev
```

Pull requests and feature suggestions are welcome. Please open an issue describing your proposal before submitting large changes.
MIT © TunnelMind