Commit 7dbc869
Merge pull request #30 from m1rl0k/documentation
Documentation
2 parents 936b8d8 + 5f62c41

File tree: 6 files changed, +236 -49 lines


README.md

Lines changed: 83 additions & 43 deletions
@@ -42,44 +42,86 @@ Context-Engine is a plug-and-play MCP retrieval stack that unifies code indexing
 
 > **See [docs/IDE_CLIENTS.md](docs/IDE_CLIENTS.md) for detailed configuration examples.**
 
-## Quickstart (5 minutes)
 
-This gets you from zero to “search works” in under five minutes.
+## Getting Started
 
-1) Prereqs
-- Docker + Docker Compose
-- make (optional but recommended)
-- Node/npm if you want to use mcp-remote (optional)
+### Option 1: Deploy & Connect (Recommended)
 
-2) command (recommended)
+Deploy Context-Engine once, connect any IDE. No need to clone this repo into your project.
+
+**1. Start the stack** (on your dev machine or a server):
 ```bash
-# Provisions tokenizer.json, downloads a tiny llama.cpp model, reindexes, and brings all services up
-INDEX_MICRO_CHUNKS=1 MAX_MICRO_CHUNKS_PER_FILE=200 make reset-dev-dual
+git clone https://github.com/m1rl0k/Context-Engine.git && cd Context-Engine
+docker compose up -d
+```
+
+**2. Index your codebase** (point to any project):
+```bash
+HOST_INDEX_PATH=/path/to/your/project docker compose run --rm indexer
+```
+
+**3. Connect your IDE** — add to your MCP config:
+```json
+{
+  "mcpServers": {
+    "context-engine": { "url": "http://localhost:8001/sse" }
+  }
+}
+```
+
+> See [docs/IDE_CLIENTS.md](docs/IDE_CLIENTS.md) for Cursor, Windsurf, Cline, Codex, and other client configs.
+
+### Option 2: Remote Deployment
+
+Run Context-Engine on a server and connect from anywhere.
+
+**Docker on a server:**
+```bash
+# On server (e.g., context.yourcompany.com)
+git clone https://github.com/m1rl0k/Context-Engine.git && cd Context-Engine
+docker compose up -d
 ```
+
+**Index from your local machine:**
 ```bash
-# Provisions the context-engine for rapid development,
-HOST_INDEX_PATH=. COLLECTION_NAME=codebase docker compose run --rm indexer --root /work --recreate --no-skip-unchanged
+# VS Code extension (recommended) - install, set server URL, click "Upload Workspace"
+# Or CLI:
+scripts/remote_upload_client.py --server http://context.yourcompany.com:9090 --path /your/project
+```
+
+**Connect IDE to remote:**
+```json
+{ "mcpServers": { "context-engine": { "url": "http://context.yourcompany.com:8001/sse" } } }
 ```
 
-- Default ports: Memory MCP :8000, Indexer MCP :8001, 8003, Qdrant :6333, llama.cpp :8080
-
-**Seamless Setup Note:**
-- The stack uses a **single unified `codebase` collection** by default
-- All your code goes into one collection for seamless cross-repo search
-- No per-workspace fragmentation - search across everything at once
-- Health checks auto-detect and fix cache/collection sync issues
-- Just run `make reset-dev-dual` on any machine and it works™
-
-### Make targets: SSE, RMCP, and dual-compat
-- Legacy SSE only (default):
-  - Ports: 8000 (/sse), 8001 (/sse)
-  - Command: `INDEX_MICRO_CHUNKS=1 MAX_MICRO_CHUNKS_PER_FILE=200 make reset-dev`
-- RMCP (Codex) only:
-  - Ports: 8002 (/mcp), 8003 (/mcp)
-  - Command: `INDEX_MICRO_CHUNKS=1 MAX_MICRO_CHUNKS_PER_FILE=200 make reset-dev-codex`
-- Dual compatibility (SSE + RMCP together):
-  - Ports: 8000/8001 (/sse) and 8002/8003 (/mcp)
-  - Command: `INDEX_MICRO_CHUNKS=1 MAX_MICRO_CHUNKS_PER_FILE=200 make reset-dev-dual`
+**Kubernetes:** See [deploy/kubernetes/README.md](deploy/kubernetes/README.md) for Kustomize deployment.
+
+### Option 3: Full Development Setup
+
+For contributors or advanced customization with the LLM decoder:
+
+```bash
+INDEX_MICRO_CHUNKS=1 MAX_MICRO_CHUNKS_PER_FILE=200 make reset-dev-dual
+```
+
+### Default Endpoints
+
+| Service | Port | Use |
+|---------|------|-----|
+| Indexer MCP | 8001 (SSE), 8003 (RMCP) | Code search, context retrieval |
+| Memory MCP | 8000 (SSE), 8002 (RMCP) | Knowledge storage |
+| Qdrant | 6333 | Vector database |
+| llama.cpp | 8080 | Local LLM decoder |
+
+**Stack behavior:**
+- Single `codebase` collection — search across all indexed repos
+- Health checks auto-detect and fix cache/collection sync
+- Live file watching with automatic reindexing
+
+### Transport Modes
+- **SSE** (default): `http://localhost:8001/sse` — Cursor, Cline, Windsurf, Augment
+- **RMCP**: `http://localhost:8003/mcp` — Codex, Qodo
+- **Dual**: Both SSE + RMCP simultaneously (`make reset-dev-dual`)
 
 ### Environment Setup
 
@@ -131,19 +173,17 @@ docker compose up -d --force-recreate mcp_indexer mcp_indexer_http llamacpp
 This re-enables the `llamacpp` container and resets `.env` to `http://llamacpp:8080`.
 
 ### Make targets (quick reference)
-- reset-dev: SSE stack on 8000/8001; seeds Qdrant, downloads tokenizer + tiny llama.cpp model, reindexes, brings up memory + indexer + watcher
-- reset-dev-codex: RMCP stack on 8002/8003; same seeding + bring-up for Codex/Qodo
-- reset-dev-dual: SSE + RMCP together (8000/8001 and 8002/8003)
-- up / down / logs / ps: Docker Compose lifecycle helpers
-- index / reindex / reindex-hard: Index current repo; `reindex` recreates the collection; `reindex-hard` also clears the local cache so unchanged files are re-uploaded
-- index-here / index-path: Index arbitrary host path without cloning into this repo
-- watch: Watch-and-reindex on file changes
-- warm / health: Warm caches and run health checks
-- hybrid / rerank: Example hybrid search + reranker helper
-- setup-reranker / rerank-local / quantize-reranker: Manage ONNX reranker assets and local runs
-- prune / prune-path: Remove stale points (missing files or hash mismatch)
-- llama-model / tokenizer: Fetch tiny GGUF model and tokenizer.json
-- qdrant-status / qdrant-list / qdrant-prune / qdrant-index-root: Convenience wrappers that route through the MCP bridge to inspect or maintain collections
+- **Setup**: `reset-dev`, `reset-dev-codex`, `reset-dev-dual` - Full stack with SSE, RMCP, or both
+- **Lifecycle**: `up`, `down`, `logs`, `ps`, `restart`, `rebuild`
+- **Indexing**: `index`, `reindex`, `reindex-hard`, `index-here`, `index-path`
+- **Watch**: `watch` (local), `watch-remote` (upload to remote server)
+- **Maintenance**: `prune`, `prune-path`, `warm`, `health`, `decoder-health`
+- **Search**: `hybrid`, `rerank`, `rerank-local`
+- **LLM**: `llama-model`, `tokenizer`, `llamacpp-up`, `setup-reranker`, `quantize-reranker`
+- **MCP Tools**: `qdrant-status`, `qdrant-list`, `qdrant-prune`, `qdrant-index-root`
+- **Remote**: `dev-remote-up`, `dev-remote-down`, `dev-remote-bootstrap`
+- **Router**: `route-plan`, `route-run`, `router-eval`, `router-smoke`
+- **CLI**: `ctx Q="your question"` - Prompt enhancement with repo context
 
 
 ### CLI: ctx prompt enhancer
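The "Connect your IDE" step in the new Getting Started section is plain JSON. A quick sanity check for such a config, a sketch only, using the server name and URL from the README's own example:

```python
import json

# Minimal MCP client config from the README's "Connect your IDE" step.
config_text = """
{
  "mcpServers": {
    "context-engine": { "url": "http://localhost:8001/sse" }
  }
}
"""

config = json.loads(config_text)
servers = config["mcpServers"]

# Every server entry should carry a URL ending in a known transport
# path: /sse for SSE clients, /mcp for RMCP clients.
for name, entry in servers.items():
    url = entry["url"]
    assert url.endswith("/sse") or url.endswith("/mcp"), url

print(sorted(servers))  # ['context-engine']
```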

docs/ARCHITECTURE.md

Lines changed: 6 additions & 0 deletions
@@ -122,6 +122,12 @@ Context Engine is a production-ready MCP (Model Context Protocol) retrieval stac
 - **Local LLM Integration**: llama.cpp for offline expansion
 - **Caching**: Expanded query results cached for reuse
 
+#### MCP Router (`scripts/mcp_router.py`)
+- **Intent Classification**: Determines which MCP tool to call based on query
+- **Tool Orchestration**: Routes to search, answer, memory, or index tools
+- **HTTP Execution**: Executes tools via RMCP/HTTP without extra dependencies
+- **Plan Mode**: Preview tool selection without execution
+
 ## Data Flow Architecture
 
 ### Search Request Flow
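The router bullets added above describe intent classification and a plan mode. A hypothetical keyword-based sketch of that flow, not the actual `mcp_router.py` logic, whose rules and tool names beyond search/answer/memory/index are assumptions here:

```python
# Hypothetical intent routing as described for scripts/mcp_router.py.
# The keyword rules below are illustrative, not the real implementation.
ROUTES = [
    (("remember", "note", "store"), "memory"),
    (("index", "reindex"), "index"),
    (("why", "how", "explain"), "answer"),
]

def classify(query: str) -> str:
    """Pick an MCP tool name for a query; fall back to search."""
    q = query.lower()
    for keywords, tool in ROUTES:
        if any(k in q for k in keywords):
            return tool
    return "search"

def plan(query: str) -> dict:
    # "Plan mode": report which tool would run, without executing it.
    return {"query": query, "tool": classify(query), "executed": False}

print(plan("please reindex the repo"))
```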

docs/DEVELOPMENT.md

Lines changed: 11 additions & 4 deletions
@@ -73,15 +73,22 @@ Context-Engine/
 ├── scripts/                        # Core application code
 │   ├── mcp_memory_server.py        # Memory MCP server implementation
 │   ├── mcp_indexer_server.py       # Indexer MCP server implementation
+│   ├── mcp_router.py               # Intent-based tool routing
 │   ├── hybrid_search.py            # Search algorithm implementation
+│   ├── ctx.py                      # CLI prompt enhancer
 │   ├── cache_manager.py            # Unified caching system
 │   ├── async_subprocess_manager.py # Process management
 │   ├── deduplication.py            # Request deduplication
 │   ├── semantic_expansion.py       # Query expansion
-│   ├── utils.py                    # Shared utilities
-│   ├── ingest_code.py              # Code indexing logic
-│   ├── watch_index.py              # File system watcher
-│   └── logger.py                   # Structured logging
+│   ├── collection_health.py        # Cache/collection sync checks
+│   ├── utils.py                    # Shared utilities
+│   ├── ingest_code.py              # Code indexing logic
+│   ├── watch_index.py              # File system watcher
+│   ├── upload_service.py           # Remote upload HTTP service
+│   ├── remote_upload_client.py     # Remote sync client
+│   ├── memory_backup.py            # Memory export
+│   ├── memory_restore.py           # Memory import
+│   └── logger.py                   # Structured logging
 ├── tests/                          # Test suite
 │   ├── conftest.py                 # Test configuration
 │   ├── test_*.py                   # Unit and integration tests

docs/IDE_CLIENTS.md

Lines changed: 45 additions & 1 deletion
@@ -1,20 +1,39 @@
 # IDE & Client Configuration
 
-Configuration examples for connecting various IDEs and MCP clients to Context Engine.
+Connect your IDE to a running Context-Engine stack. No need to clone this repo into your project.
 
 **Documentation:** [README](../README.md) · [Configuration](CONFIGURATION.md) · [IDE Clients](IDE_CLIENTS.md) · [MCP API](MCP_API.md) · [ctx CLI](CTX_CLI.md) · [Memory Guide](MEMORY_GUIDE.md) · [Architecture](ARCHITECTURE.md) · [Multi-Repo](MULTI_REPO_COLLECTIONS.md) · [Kubernetes](../deploy/kubernetes/README.md) · [VS Code Extension](vscode-extension.md) · [Troubleshooting](TROUBLESHOOTING.md) · [Development](DEVELOPMENT.md)
 
 ---
 
 **On this page:**
+- [Quick Start](#quick-start)
 - [Supported Clients](#supported-clients)
 - [SSE Clients](#sse-clients-port-80008001)
 - [RMCP Clients](#rmcp-clients-port-80028003)
 - [Mixed Transport](#mixed-transport-examples)
+- [Remote Server](#remote-server)
 - [Verification](#verification)
 
 ---
 
+## Quick Start
+
+**Prerequisites:** Context-Engine running somewhere (localhost, remote server, or Kubernetes).
+
+**Minimal config** — add to your IDE's MCP settings:
+```json
+{
+  "mcpServers": {
+    "context-engine": { "url": "http://localhost:8001/sse" }
+  }
+}
+```
+
+Replace `localhost` with your server IP/hostname for remote setups.
+
+---
+
 ## Supported Clients
 
 | Client | Transport | Notes |
@@ -169,6 +188,31 @@ url = "http://127.0.0.1:8003/mcp"
 
 ---
 
+## Remote Server
+
+When Context-Engine runs on a remote server (e.g., `context.yourcompany.com`):
+
+```json
+{
+  "mcpServers": {
+    "context-engine": { "url": "http://context.yourcompany.com:8001/sse" }
+  }
+}
+```
+
+**Indexing your local project to the remote server:**
+```bash
+# Using VS Code extension (recommended)
+# Install vscode-context-engine, configure server URL, click "Upload Workspace"
+
+# Using CLI
+scripts/remote_upload_client.py --server http://context.yourcompany.com:9090 --path /your/project
+```
+
+> See [docs/MULTI_REPO_COLLECTIONS.md](MULTI_REPO_COLLECTIONS.md) for multi-repo and Kubernetes deployment.
+
+---
+
 ## Important Notes for IDE Agents
 
 - **Do not send null values** to MCP tools. Omit the field or pass an empty string "" instead.
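The "do not send null values" rule above is easy to enforce mechanically before a tool call. A small sketch, where `clean_args` is a hypothetical helper, not part of the Context-Engine codebase:

```python
# MCP tools here should never receive JSON null: omit the field or
# send "" instead. A hypothetical sanitizer for tool arguments:
def clean_args(args: dict) -> dict:
    # Drop None-valued fields entirely rather than sending null.
    return {k: v for k, v in args.items() if v is not None}

payload = clean_args({"query": "hybrid search", "limit": 5, "filter": None})
print(payload)  # {'query': 'hybrid search', 'limit': 5}
```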

docs/MEMORY_GUIDE.md

Lines changed: 10 additions & 1 deletion
@@ -167,5 +167,14 @@ Different hash lengths for different workspace types:
 
 ## Backup and Migration
 
-For production-grade backup/migration strategies, see the official Qdrant documentation for snapshots and export/import. For local development, rely on Docker volumes and reindexing when needed.
+### Memory Backup/Restore Scripts
+
+```bash
+# Export memories to JSON
+python scripts/memory_backup.py --collection codebase --output memories.json
+
+# Restore memories from backup
+python scripts/memory_restore.py --input memories.json --collection codebase
+```
+
+For production-grade backup/migration strategies, see the official Qdrant documentation for snapshots and export/import. For local development, rely on Docker volumes and reindexing when needed.
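The backup/restore commands above move memories through a JSON file. A minimal round-trip sketch; the record fields and file layout here are assumptions for illustration, not the actual schema of `memory_backup.py` / `memory_restore.py`:

```python
import json
import os
import tempfile

# Assumed export format: a flat list of memory records.
memories = [
    {"id": 1, "text": "Indexer runs on port 8001", "tags": ["ports"]},
    {"id": 2, "text": "Qdrant stores vectors", "tags": ["storage"]},
]

path = os.path.join(tempfile.mkdtemp(), "memories.json")

# "Backup": dump records to JSON.
with open(path, "w") as f:
    json.dump(memories, f)

# "Restore": load them back and verify the round trip is lossless.
with open(path) as f:
    restored = json.load(f)

assert restored == memories
```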

docs/vscode-extension.md

Lines changed: 81 additions & 0 deletions
@@ -7,19 +7,74 @@ Context Engine Uploader extension for automatic workspace sync and Prompt+ integ
 ---
 
 **On this page:**
+- [Quick Start](#quick-start)
 - [Features](#features)
+- [Workflow Examples](#workflow-examples)
 - [Installation](#installation)
 - [Configuration](#configuration)
 - [Commands](#commands-and-lifecycle)
 
 ---
 
+## Quick Start
+
+1. **Install**: Build the `.vsix` and install it in VS Code (see [Installation](#installation))
+2. **Configure server**: Settings → `contextEngineUploader.endpoint` → `http://localhost:9090` (or your remote server)
+3. **Index workspace**: Click the status bar button or run `Context Engine Uploader: Start`
+4. **Use Prompt+**: Select code, click `Prompt+` in the status bar to enhance it with AI
+
 ## Features
 
 - **Auto-sync**: Force sync on startup + watch mode keeps your workspace indexed
 - **Prompt+ button**: Status bar button to enhance selected text with unicorn mode
 - **Output channel**: Real-time logs for force-sync and watch operations
 - **GPU decoder support**: Configure llama.cpp, Ollama, or GLM as decoder backend
+- **Remote server support**: Index to any Context-Engine server (local, remote, Kubernetes)
+
+## Workflow Examples
+
+### Local Development
+Context-Engine running on the same machine:
+```
+Endpoint: http://localhost:9090
+Target Path: (leave empty - uses current workspace)
+```
+Open any project → extension auto-syncs → MCP tools have your code context.
+
+### Remote Server
+Context-Engine on a team server:
+```
+Endpoint: http://context.yourcompany.com:9090
+Target Path: /Users/you/projects/my-app
+```
+Your local code is indexed to the shared server. Team members search across all indexed repos.
+
+### Multi-Project Workflow
+Index multiple projects to the same server:
+1. Open Project A → auto-syncs to the `codebase` collection
+2. Open Project B → auto-syncs to the same collection
+3. MCP tools search across both projects seamlessly
+
+### Prompt+ Enhancement
+1. Select code or write a prompt in your editor
+2. Click `Prompt+` in the status bar (or run the command)
+3. The extension runs `ctx.py --unicorn` with your selection
+4. The enhanced prompt replaces your selection with code-grounded context
+
+**Example input:**
+```
+Add error handling to the upload function
+```
+
+**Example output:**
+```
+Looking at upload_service.py lines 120-180, the upload_file() function currently lacks error handling. Add try/except blocks to handle:
+1. Network timeouts (requests.exceptions.Timeout)
+2. Invalid file paths (FileNotFoundError)
+3. Server errors (HTTP 5xx responses)
+
+Reference the existing error patterns in remote_upload_client.py lines 45-67 which use structured logging via logger.error().
+```
 
 ## Installation
 
@@ -85,3 +140,29 @@ All settings live under `Context Engine Uploader` in the VS Code settings UI or
 - `Context Engine Uploader: Prompt+ (Unicorn Mode)` — runs `scripts/ctx.py --unicorn` on your current selection and replaces it with the enhanced prompt (status bar button).
 
 The extension logs all subprocess output to the **Context Engine Upload** output channel so you can confirm uploads without leaving VS Code. The watch process shuts down automatically when VS Code exits or when you run the Stop command.
+
+## Troubleshooting
+
+### Extension not syncing
+1. Check the **Context Engine Upload** output channel for errors
+2. Verify the `endpoint` setting points to a running upload service
+3. Ensure Python 3.8+ is available at the configured `pythonPath`
+
+### Prompt+ not working
+1. Verify the decoder is running: `curl http://localhost:8081/health`
+2. Check the `decoderUrl` setting matches your decoder (llama.cpp, Ollama, or GLM)
+3. For a GPU decoder: enable the `useGpuDecoder` setting
+
+### Connection refused
+```bash
+# Verify upload service is running
+curl http://localhost:9090/health
+
+# Check Docker logs
+docker compose logs upload_service
+```
+
+### Remote server issues
+1. Ensure port 9090 is accessible from your machine
+2. Check that firewall rules allow inbound connections
+3. Verify the server's `upload_service` container is running

0 commit comments