Agentic code search from the command line.
Ask a question in plain English. Get back file paths and line numbers.
- Who is this for?
- Why probe?
- Install
- Quick start
- Usage
- Config file
- Output formats
- Vim / Neovim integration
- Scoped search with `--stdin`
- Query history
- Search modes
- Think mode
- Verbose mode
- How it works
- Development
## Who is this for?

In the age of agentic coding, probe by itself might seem redundant — why not just let Claude Code or Cursor search for you? It turns out probe fills a gap for two kinds of people:
If you're not fully onboard with AI agents — probe gives you the power of agentic search without surrendering your entire workflow to an agent harness. You stay in control. The LLM does the tedious part — figuring out where things are — and you do the actual coding. No chat windows, no approval prompts, just results in your terminal.
If you live in the terminal and vim — this is why probe was built. When you're deep in a vibe-coded project and you've lost track of where things live, probe drops you straight into a quickfix list. :cnext, :cprev, done. It's the missing bridge between "I know what I want" and "I'm staring at the right line of code."
probe can also be called by other tools directly — think of it as a Claude Code skill for cheaper, faster code search. The JSON and quickfix output formats make it easy to integrate into scripts and editor plugins. Though honestly, in its current state, this is more of a future direction than a recommendation.
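As a sketch of that script-facing use, the JSON output (shown under Output formats) can be post-processed with a few lines of standard tooling. The sample document below is inlined so the snippet is self-contained; in practice you would pipe `probe --json "query"` straight into the Python step:

```shell
# Turn probe-style JSON results into file:start-end locations.
echo '{"results":[{"file":"src/middleware/auth.ts","start_line":14,"end_line":58,"reason":"auth"}],"summary":"","turns":3}' |
python3 -c '
import json, sys
# Each result carries file, start_line, end_line, reason.
for r in json.load(sys.stdin)["results"]:
    print("%s:%d-%d" % (r["file"], r["start_line"], r["end_line"]))
'
```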
## Why probe?

You know what you're looking for, just not where it is. Grep needs you to know the exact string. IDE search needs you to know the filename. probe just needs the question.
```
probe "where is the rate limiting logic?"

src/middleware/ratelimit.go:18-45  Token bucket implementation with per-IP tracking
src/config/limits.go:3-11          Rate limit constants and defaults
```
No embeddings. No vector database. No indexing step. probe hands an LLM a few Unix tools — grep, find, read — and lets it search iteratively until it finds what you need.
- Zero setup — `go install` and go. No indexing, no config files, no database
- Any LLM — works with Ollama locally, or OpenAI/Anthropic/Google/Groq/OpenRouter in the cloud
- Pipe-friendly — stdout has results, stderr has everything else. Plays nice with Unix
- Read-only — probe never writes to your codebase. All paths are sandboxed
- Fast — small models (3B) work great. Most searches finish in under 5 seconds
## Install

Requires Go 1.24+ and ripgrep.
```shell
go install github.com/newtoallofthis/probe@latest
```

Or grab a binary from Releases.
## Quick start

probe connects to any OpenAI-compatible API. The fastest way to get started is with Ollama running locally:
```shell
# Pull a model
ollama pull ministral-3:3b

# Search your codebase
cd your-project
probe "where are the database migrations?"
```

That's it. No config files needed. probe auto-detects your project language, respects `.gitignore`, and searches from the current directory.
probe supports OpenAI, Anthropic, and Google natively. The provider is auto-detected from your API key prefix, or you can set it explicitly with --provider.
```shell
# OpenAI
export PROBE_API_KEY="sk-..."
export PROBE_MODEL="gpt-4o-mini"

# Anthropic (native — no proxy needed)
export PROBE_API_KEY="sk-ant-..."
export PROBE_MODEL="claude-sonnet-4-5-20250929"

# Google Gemini
export PROBE_API_KEY="AI..."
export PROBE_MODEL="gemini-2.0-flash"
export PROBE_PROVIDER="google"

# Any OpenAI-compatible endpoint (Groq, OpenRouter, etc.)
export PROBE_API_KEY="gsk_..."
export PROBE_BASE_URL="https://api.groq.com/openai/v1"
export PROBE_MODEL="llama-3.3-70b-versatile"

probe "how does the rate limiter work?"
```

## Usage

```
probe [flags] <query>
```
| Flag | Description | Default |
|---|---|---|
| `--model <name>` | Model name | `ministral-3:3b` |
| `--base-url <url>` | API base URL | `http://localhost:11434/v1` |
| `--max-turns <n>` | Maximum agent turns | `10` |
| `--dir <path>` | Project directory to search | `.` |
| `--json` | Output results as JSON | |
| `--format <fmt>` | Output format: `human`, `json`, `paths`, `qf` | `human` |
| `-m, --mode <mode>` | Search mode: `auto`, `locate`, `explore`, `trace` | `auto` |
| `-t, --think` | Thorough search mode (more turns, deeper verification) | |
| `--stdin` | Read file list from stdin to scope the search | |
| `--provider <name>` | LLM provider: `openai`, `anthropic`, `google` (auto-detected) | |
| `-v, --verbose` | Show agent search trace on stderr | |
| `-q, --quiet` | Suppress all stderr output | |
| `--list` | List past queries for the current directory | |
| `--all` | List all past queries across all directories | |
| `--show <id>` | Show full results of a history entry by ID | |
| `--version` | Print version and exit | |
### Environment variables

| Variable | Description |
|---|---|
| `PROBE_API_KEY` | API key for the LLM provider |
| `PROBE_MODEL` | Default model name |
| `PROBE_BASE_URL` | Default API base URL |
| `PROBE_MAX_TURNS` | Default max turns |
| `PROBE_MODE` | Default search mode (`auto`, `locate`, `explore`, `trace`) |
| `PROBE_PROVIDER` | LLM provider (`openai`, `anthropic`, `google`) |
| `PROBE_THINK` | Enable think mode (`1` or `true`) |
Precedence: flags > environment variables > config file > defaults.
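As an illustration of one link in that chain, the env-over-default step behaves like the usual shell fallback pattern (a sketch only; probe implements this in Go):

```shell
# env > default: use $PROBE_MODEL when set, else fall back to the built-in default
model="${PROBE_MODEL:-ministral-3:3b}"
echo "$model"
```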
## Config file

probe looks for a TOML config file in these locations (first found wins):
- `.probe.toml` in the project directory — per-project settings
- `$XDG_CONFIG_HOME/probe/config.toml` or `~/.config/probe/config.toml` — global defaults
```toml
# ~/.config/probe/config.toml
model = "gpt-4o-mini"
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"  # reads the key from this env var
max_turns = 15
output_format = "human"
show_reasons = true
think = false
mode = "auto"     # auto, locate, explore, trace
provider = ""     # openai, anthropic, google (auto-detected if empty)
```

Drop a `.probe.toml` in any project root to override globals for that repo:
```toml
# myproject/.probe.toml
model = "llama-3.3-70b-versatile"
max_turns = 20
think = true
```

## Output formats

When stdout is a terminal, results include file paths, line ranges, and reasons with color:
```
src/middleware/auth.ts:14-58  Defines verifyToken() and requireAdmin()
src/utils/jwt.ts:3-22         JWT signing and verification helpers
```
When piped, reasons and colors are stripped automatically:
```shell
probe "auth middleware" | head -1
# src/middleware/auth.ts:14-58
```

### JSON

```shell
probe --json "auth middleware"
```

```json
{
  "results": [
    {
      "file": "src/middleware/auth.ts",
      "start_line": 14,
      "end_line": 58,
      "reason": "Defines verifyToken() and requireAdmin()"
    }
  ],
  "summary": "Found auth middleware and JWT helpers",
  "turns": 3
}
```

### Paths

Deduplicated file paths, one per line. Designed for xargs:
```shell
probe --format=paths "auth" | xargs wc -l
```

### Quickfix

Vim/Neovim quickfix-compatible format (`file:line:col: message`). Load results directly into your editor's quickfix list:
```shell
probe --format=qf "auth middleware" | vim -q /dev/stdin
```

## Vim / Neovim integration

probe's quickfix format (`--format=qf`) makes it a first-class citizen in Vim and Neovim.
Set probe as your `grepprg` and search with `:grep` as usual:

```vim
" In your vimrc / init.vim
set grepprg=probe\ --format=qf
set grepformat=%f:%l:%c:\ %m
```

Now `:grep "where is the auth middleware?"` populates the quickfix list. Navigate with `:cnext` / `:cprev`.
Run probe from command mode and jump straight to results:
```vim
:cexpr system('probe --format=qf "error handling"')
:copen
```

```shell
# Open vim with probe results in the quickfix list
probe --format=qf "auth middleware" | vim -q /dev/stdin

# Or with Neovim
probe --format=qf "auth middleware" | nvim -q /dev/stdin
```

## Scoped search with `--stdin`

Pipe a file list into probe to restrict the search scope. Useful for searching only staged files, recently changed files, or a specific subset:
```shell
# Search only git-staged files
git diff --cached --name-only | probe --stdin "TODO comments"

# Search only files changed in the last week
git log --since="1 week ago" --name-only --format="" | sort -u | probe --stdin "deprecated API calls"

# Search specific files
find src/api -name "*.go" | probe --stdin "where is the rate limiter?"
```

## Query history

probe automatically saves every search result to a local SQLite database. You can recall past queries without re-running the search.
```shell
# List past queries for the current project
probe --list

# List all queries across all projects
probe --all

# Re-display results from a previous search by ID
probe --show 42
```

`--list` and `--all` print a table of past queries with their IDs, timestamps, directories, and query text. `--show` re-renders the full results using your current output format settings.
The history database is stored at `~/.local/share/probe/history.db` (or `$XDG_DATA_HOME/probe/history.db`).
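The lookup follows the usual XDG fallback; a shell sketch of how that path resolves:

```shell
# $XDG_DATA_HOME wins when set; otherwise fall back to ~/.local/share
db="${XDG_DATA_HOME:-$HOME/.local/share}/probe/history.db"
echo "$db"
```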
## Search modes

probe supports three search modes that control how the agent approaches your query. By default (`--mode auto`), the LLM picks the best mode automatically on the first turn.
| Mode | When to use | Behavior |
|---|---|---|
| `locate` | "Where is X?" | Fast. Finds the file/function and submits in 1-3 turns. |
| `explore` | "How does X work?" | Thorough. Reads multiple files, synthesizes across the codebase. |
| `trace` | "Follow the call chain from X to Y" | Sequential. Follows references through the dependency chain. |
```shell
# Let the LLM choose (default)
probe "where is the auth middleware?"

# Force a specific mode
probe -m locate "where is main?"
probe -m explore "how does the billing system work?"
probe -m trace "follow a request from the HTTP handler to the database"
```

Each mode injects turn-aware pressure — the agent sees `[Turn N — M remaining]` each iteration and adjusts its depth accordingly. Locate mode nudges early submission; explore and trace modes allow deeper investigation before applying pressure.
## Think mode

For complex questions that need deeper investigation, use `-t` / `--think`:
```shell
probe -t "how does the billing system calculate prorated charges?"
```

Think mode gives the agent more turns and encourages it to verify findings by cross-referencing related files — imports, configs, tests — before submitting an answer. It's slower but more thorough.
## Verbose mode

See what the agent is doing in real time:
```shell
probe -v "where is main?"
```

```
⠋ Searching...
├─ grep func\s+main
│  → 2 matches
├─ read main.go:1-50
│  → 50 lines
│  tokens: +512/+128 (total: 640)
├─ submit_answer
✓ Done (3 turns, 1.8s)

main.go:28-36 CLI entrypoint and orchestration
```
## Exit codes

| Code | Meaning |
|---|---|
| `0` | Results found |
| `1` | No results found |
| `2` | Error (bad config, API unreachable, etc.) |
Follows grep conventions, so probe works naturally in scripts:
```shell
probe "auth" && echo "found" || echo "nothing"
```

## How it works

probe is an agent loop. It sends your query to an LLM along with your project's directory tree, then lets the LLM iteratively call search tools until it finds what you asked for:
```
User query → Mode selection (auto: LLM picks locate/explore/trace)
    │
    ▼
System prompt (project tree, language hints, mode strategy)
    │
    ▼
Agent loop (up to --max-turns):
    │
    ├─ [Turn N — M remaining] injected
    ├─ LLM picks tools to call
    ├─ Tools run in parallel
    ├─ Results fed back to LLM
    ├─ Mode-specific pressure applied
    └─ Repeat until submit_answer
    │
    ▼
Format and print results
```
The LLM has five tools:
| Tool | Purpose |
|---|---|
| `grep` | Search file contents with ripgrep (regex, globs) |
| `find_files` | Discover files/directories by pattern |
| `read_file` | Read file contents with line numbers |
| `tree` | List directory tree |
| `submit_answer` | Return final results with file locations |
All tools are read-only and sandboxed to the project directory. Results are filtered through .gitignore and truncated with feedback so the LLM knows its view is partial.
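The sandboxing idea can be sketched in shell: resolve the requested path and reject anything that escapes the project root. This is an illustration of the technique, not probe's actual Go implementation (and `realpath -m` assumes GNU coreutils):

```shell
# Reject paths that resolve outside the project root (e.g. via ../ tricks).
in_sandbox() {
  root=$(realpath "$1")
  target=$(realpath -m "$root/$2")
  case "$target" in
    "$root"|"$root"/*) return 0 ;;  # inside the root
    *)                 return 1 ;;  # escaped
  esac
}

in_sandbox . "src/main.go" && echo "allowed"
in_sandbox . "../../etc/passwd" || echo "blocked"
```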
## Development

See Architecture for project structure. Requires `just` as a task runner.
```shell
just build          # compile the binary
just test           # run all tests
just test-one Name  # run a single test
just ci             # fmt + vet + test
just run "query"    # build and run
just run-v "query"  # run with verbose
```

## License

MIT
