- reset-dev / reset-dev-codex / reset-dev-dual: Full-stack setup with SSE (8000/8001), RMCP (8002/8003), or both; the Codex/Qodo variant runs the same seeding + bring-up against the RMCP ports
- up / down / logs / ps: Docker Compose lifecycle helpers
- index / reindex / reindex-hard: Index the current repo; `reindex` recreates the collection; `reindex-hard` also clears the local cache so unchanged files are re-uploaded
- index-here / index-path: Index an arbitrary host path without cloning it into this repo
- watch: Watch and reindex on file changes
- warm / health: Warm caches and run health checks
- hybrid / rerank: Example hybrid search + reranker helper
- setup-reranker / rerank-local / quantize-reranker: Manage ONNX reranker assets and local runs
- prune / prune-path: Remove stale points (missing files or hash mismatches)
- llama-model / tokenizer: Fetch a tiny GGUF model and tokenizer.json
- qdrant-status / qdrant-list / qdrant-prune / qdrant-index-root: Convenience wrappers that route through the MCP bridge to inspect or maintain collections
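A typical bring-up with these targets might look like the following. This is a sketch only; exact behavior depends on this repo's Makefile, so check `make help` or the Makefile itself before relying on it.

```
# Bring up the SSE stack, index the current repo, and confirm health
make reset-dev
make index
make health

# Keep the index fresh while editing
make watch

# Maintenance: recreate the collection, or force re-upload of unchanged files
make reindex
make reindex-hard
```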
## Backup and Migration
For production-grade backup/migration strategies, see the official Qdrant documentation for snapshots and export/import. For local development, rely on Docker volumes and reindexing when needed.
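As one concrete option, Qdrant's snapshot REST API can capture a collection before a migration. The endpoints below follow the official Qdrant API on its default port 6333, with `codebase` standing in for your collection name:

```
# Create a snapshot of the `codebase` collection
curl -X POST "http://localhost:6333/collections/codebase/snapshots"

# List available snapshots
curl "http://localhost:6333/collections/codebase/snapshots"

# Download one for off-box storage (name comes from the list call)
curl -O "http://localhost:6333/collections/codebase/snapshots/<snapshot_name>"
```

Restoring on another host goes through the snapshot upload/recover endpoints; see the Qdrant documentation referenced above for the full procedure.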
3. **Index workspace**: Click the status bar button or run `Context Engine Uploader: Start`
4. **Use Prompt+**: Select code, then click `Prompt+` in the status bar to enhance it with AI
## Features
- **Auto-sync**: Force sync on startup plus watch mode keeps your workspace indexed
- **Prompt+ button**: Status bar button to enhance selected text with unicorn mode
- **Output channel**: Real-time logs for force-sync and watch operations
- **GPU decoder support**: Configure llama.cpp, Ollama, or GLM as the decoder backend
- **Remote server support**: Index to any Context-Engine server (local, remote, Kubernetes)
## Workflow Examples

### Local Development

Context-Engine running on the same machine:

```
Endpoint: http://localhost:9090
Target Path: (leave empty - uses current workspace)
```

Open any project → the extension auto-syncs → MCP tools have your code context.

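In `settings.json`, the two fields above map to extension settings. The key prefix here is a hypothetical illustration (verify the exact names in the extension's settings UI); the values mirror the example:

```jsonc
{
  // Hypothetical key names - confirm against the extension's settings UI
  "contextEngineUploader.endpoint": "http://localhost:9090",
  "contextEngineUploader.targetPath": ""
}
```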
### Remote Server

Context-Engine on a team server:

```
Endpoint: http://context.yourcompany.com:9090
Target Path: /Users/you/projects/my-app
```

Your local code is indexed to the shared server, and team members can search across all indexed repos.

+
### Multi-Project Workflow

Index multiple projects to the same server:

1. Open Project A → auto-syncs to the `codebase` collection
2. Open Project B → auto-syncs to the same collection
3. MCP tools search across both projects seamlessly

### Prompt+ Enhancement

1. Select code or write a prompt in your editor
2. Click `Prompt+` in the status bar (or run the command)
3. The extension runs `ctx.py --unicorn` with your selection
4. The enhanced prompt replaces the selection with code-grounded context

**Example input:**

```
Add error handling to the upload function
```

**Example output:**

```
Looking at upload_service.py lines 120-180, the upload_file() function currently lacks error handling. Add try/except blocks to handle:

1. Network timeouts (requests.exceptions.Timeout)
2. Invalid file paths (FileNotFoundError)
3. Server errors (HTTP 5xx responses)

Reference the existing error patterns in remote_upload_client.py lines 45-67, which use structured logging via logger.error().
```

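Outside VS Code, the same enhancement can presumably be run directly, since the extension wraps `scripts/ctx.py`. The `--unicorn` flag comes from the extension description above; the exact argument shape is an assumption:

```
# Hypothetical direct invocation - the extension runs this script for you
python scripts/ctx.py --unicorn "Add error handling to the upload function"
```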
## Installation
All settings live under `Context Engine Uploader` in the VS Code settings UI.
- `Context Engine Uploader: Prompt+ (Unicorn Mode)` — runs `scripts/ctx.py --unicorn` on your current selection and replaces it with the enhanced prompt (status bar button).

The extension logs all subprocess output to the **Context Engine Upload** output channel so you can confirm uploads without leaving VS Code. The watch process shuts down automatically when VS Code exits or when you run the Stop command.
## Troubleshooting

### Extension not syncing

1. Check the **Context Engine Upload** output channel for errors
2. Verify the `endpoint` setting points to a running upload service
3. Ensure Python 3.8+ is available at the configured `pythonPath`

### Prompt+ not working

1. Verify the decoder is running: `curl http://localhost:8081/health`
2. Check that the `decoderUrl` setting matches your decoder (llama.cpp, Ollama, or GLM)
3. For a GPU decoder, enable the `useGpuDecoder` setting

### Connection refused

```bash
# Verify the upload service is running
curl http://localhost:9090/health

# Check Docker logs
docker compose logs upload_service
```

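The same probing can be scripted. This is a generic standard-library sketch, not part of the extension; the ports match the examples above (9090 for the upload service, 8081 for the decoder):

```python
import urllib.request
import urllib.error


def check_health(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if base_url's /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False


for name, url in [("upload service", "http://localhost:9090"),
                  ("decoder", "http://localhost:8081")]:
    state = "up" if check_health(url) else "down or unreachable"
    print(f"{name}: {state}")
```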
### Remote server issues

1. Ensure port 9090 is accessible from your machine
2. Check that firewall rules allow inbound connections
3. Verify the server's `upload_service` container is running
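For step 1, a bare TCP check separates firewall/routing problems from application errors. This is a generic standard-library sketch; substitute your server's hostname:

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or host not resolvable
        return False


# Replace "localhost" with your Context-Engine server's hostname
print(port_open("localhost", 9090))
```

If this returns False while the container is running, suspect the firewall or port mapping rather than the service itself.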