Save, summarize, and continue AI conversations across any LLM.
Self-hosted · 100% local · No cloud APIs · Your data stays on your device.
Install · How It Works · Extension · Tech Stack · API
- Save conversations from ChatGPT, Claude, and more — via browser extension or paste
- Summarize automatically using a local LLM (Ollama)
- Store summaries in a searchable context library
- Generate structured continuation prompts
- Continue the conversation in any other LLM — with full context
No accounts. No API keys. No data collection. Everything runs on your machine.
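The save → summarize → continue loop ends in a structured continuation prompt that any other LLM can pick up. A minimal sketch of what such a prompt builder might look like (the field names here are illustrative assumptions, not ContextVolt's actual schema — see backend/models.py for the real one):

```python
def build_continuation_prompt(title, summary, last_messages):
    """Format a saved context into a prompt another LLM can resume from.

    title / summary / last_messages are hypothetical fields; ContextVolt's
    real schema lives in backend/models.py.
    """
    lines = [
        f"# Continuing: {title}",
        "",
        "## Summary so far",
        summary,
        "",
        "## Recent messages",
    ]
    lines += [f"- {m}" for m in last_messages]
    lines += ["", "Please continue this conversation using the context above."]
    return "\n".join(lines)
```

Because the prompt is plain Markdown text, it can be pasted into ChatGPT, Claude, or any other chat interface unchanged.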
Option A — Installer
Download ContextVolt-Setup.exe from the Releases page → run it → launch from Desktop shortcut.
Option B — From source
```
git clone https://github.com/Rithvickkr/ContextVolt.git
cd ContextVolt
start.bat
```
Option C — One-line installer (macOS / Linux)
```
curl -fsSL https://raw.githubusercontent.com/Rithvickkr/ContextVolt/main/install.sh | bash
```
This clones the project to ~/.contextvolt and creates a contextvolt command. To launch anytime:
```
contextvolt
```
To update:
```
cd ~/.contextvolt && git pull
```
| Dependency | Required | Auto-installed |
|---|---|---|
| Python 3.10+ | ✅ python.org or brew install python | — |
| Git | ✅ Pre-installed on macOS. Windows: git-scm.com | — |
| Ollama | — | ✅ Downloaded & installed automatically |
| AI Model (phi3) | — | ✅ Pulled automatically (~600MB) |
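Before running the installer, you can check which of these tools are already on your PATH. A small sketch (the tool names mirror the table above; Ollama being missing is fine, since the installer fetches it):

```python
import shutil

def check_prereqs():
    """Report which prerequisite executables are already on PATH."""
    return {tool: shutil.which(tool) is not None
            for tool in ("python3", "git", "ollama")}

for tool, found in check_prereqs().items():
    print(f"{tool}: {'found' if found else 'missing'}")
```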
The browser extension lets you capture full conversations from ChatGPT and Claude with one click.
Supported browsers: Chrome, Edge, Brave, Arc (Chromium-based)
- Open chrome://extensions (or edge://extensions)
- Enable Developer Mode (toggle in top-right)
- Click Load unpacked
- Select the extension/ folder from your ContextVolt directory
macOS users: Safari is not supported. Use Chrome or any Chromium browser — most Mac developers already have one installed.
Use your saved contexts directly in VS Code.
- Open the vscode-extension/ folder in VS Code
- Press F5 to launch the Extension Development Host
- Click the 📖 icon in the sidebar to browse your contexts
- Click any context to insert the continuation prompt at your cursor
| Component | Technology |
|---|---|
| Backend | Python, FastAPI |
| Frontend | Vanilla HTML/CSS/JS, Dark theme with glassmorphism |
| Desktop Window | PyWebView (native OS WebView) |
| Database | SQLite |
| Local LLM | Ollama (phi3) |
| Browser Extension | Chrome Manifest V3 |
ContextVolt runs a local FastAPI server on http://127.0.0.1:8000.
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/health | Health check |
| GET | /api/setup/status | Setup wizard status |
| POST | /api/setup/pull-model | Trigger model download |
| POST | /api/summarize | Summarize conversation via Ollama |
| POST | /api/contexts | Save a new context |
| GET | /api/contexts | List all contexts (supports ?q= search) |
| GET | /api/contexts/{id} | Get a single context |
| PUT | /api/contexts/{id} | Update a context |
| DELETE | /api/contexts/{id} | Delete a context |
| POST | /api/contexts/{id}/prompt | Generate continuation prompt |
| GET | /api/contexts/{id}/export | Export as Markdown |
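Since everything is served locally, the API can be scripted with nothing but the standard library. A minimal client sketch using the endpoint paths from the table above (the JSON field names such as "title" and "content" are assumptions — check backend/models.py for the real request schemas):

```python
import json
from urllib import parse, request

BASE = "http://127.0.0.1:8000"

def url_for(endpoint, **query):
    """Build a full URL for a ContextVolt endpoint, with optional query params."""
    qs = f"?{parse.urlencode(query)}" if query else ""
    return f"{BASE}{endpoint}{qs}"

def post_json(endpoint, payload):
    """POST a JSON body and decode the JSON reply (requires the app running)."""
    req = request.Request(
        url_for(endpoint),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# With ContextVolt running, one could e.g.:
#   post_json("/api/summarize", {"content": "...pasted conversation..."})
#   post_json("/api/contexts", {"title": "My chat", "content": "..."})
print(url_for("/api/contexts", q="fastapi"))
# → http://127.0.0.1:8000/api/contexts?q=fastapi
```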
To use a different LLM model, edit OLLAMA_MODEL in installer.py:
```
OLLAMA_MODEL = "mistral"  # or "llama3", "gemma2", "phi3", etc.
```

```
ContextVolt/
├── start.bat            # Windows launcher
├── start.sh             # macOS / Linux launcher
├── install.sh           # One-line Mac/Linux installer (curl | bash)
├── run.py               # App entry point (FastAPI + PyWebView)
├── installer.py         # GUI setup wizard (cross-platform)
├── requirements.txt
├── backend/
│   ├── main.py          # FastAPI server + routes
│   ├── database.py      # SQLite operations
│   ├── ollama_client.py # Ollama API client
│   └── models.py        # Pydantic schemas
├── frontend/
│   ├── index.html       # Dashboard
│   ├── installer.html   # Setup wizard UI
│   ├── css/             # Styles (dark theme + glassmorphism)
│   └── js/app.js        # SPA logic
├── extension/           # Chrome / Edge browser extension
├── vscode-extension/    # VS Code sidebar extension
└── installer/           # Windows .exe build (Inno Setup)
```
MIT
Built with ❤️ by Rithvick