This re-enables the `llamacpp` container and resets `.env` to `http://llamacpp:8080`.
- llama-model / tokenizer: Fetch tiny GGUF model and tokenizer.json
- qdrant-status / qdrant-list / qdrant-prune / qdrant-index-root: Convenience wrappers that route through the MCP bridge to inspect or maintain collections
### CLI: ctx prompt enhancer
A thin CLI that retrieves code context and rewrites your input into a better, context-aware prompt using the local LLM decoder. By default it prints ONLY the improved prompt.
Examples:
````bash
# Default: print only the improved prompt (uses Docker llama.cpp on port 8080)
scripts/ctx.py "Explain the caching logic to me in detail"
# Via Make target (default improved prompt only)
make ctx Q="Explain the caching logic to me in detail"
````
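To make the flow concrete, here is a minimal Python sketch of the prompt-enhancement step. This is an illustration only, not the actual `scripts/ctx.py` implementation: the function name `build_enhanced_prompt` and the prompt template are assumptions, and the Qdrant retrieval and llama.cpp call are stubbed out as comments.

```python
# Hypothetical sketch of the enhancement step; the real scripts/ctx.py
# may structure this differently.

def build_enhanced_prompt(query: str, snippets: list[str]) -> str:
    """Assemble the instruction sent to the local LLM decoder.

    `snippets` would come from the Qdrant retrieval step; they are
    passed in directly here so the function stays self-contained.
    """
    context = "\n\n".join(snippets)
    return (
        "Rewrite the user request below into a precise, context-aware prompt.\n"
        "Reply with ONLY the improved prompt.\n\n"
        f"Relevant code context:\n{context}\n\n"
        f"User request: {query}"
    )

if __name__ == "__main__":
    # 1. (stub) retrieve snippets from Qdrant for the query
    # 2. assemble the instruction for the decoder
    prompt = build_enhanced_prompt(
        "Explain the caching logic to me in detail",
        ["def get(key):\n    return cache.get(key)"],
    )
    # 3. (stub) POST `prompt` to the local llama.cpp server on port 8080
    print(prompt)
```

The key design point the README describes is that only the decoder's rewritten prompt is printed, so the tool composes cleanly with shell pipelines.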