Use Cursor as an OpenAI-compatible LLM backend for Ironclaw.
ironclaw-cursor-brain is an OpenAiCompletions provider for Ironclaw. It wraps the Cursor Agent (subprocess) as an OpenAI Chat Completions–compatible HTTP service. Add one entry to ~/.ironclaw/providers.json and use it like built-in providers (groq, openai) — no Ironclaw source changes.
- OpenAI-compatible API — `POST /v1/chat/completions` with streaming (SSE) and non-streaming
- Session continuity — the same `X-Session-Id` resumes Cursor conversations; mappings persisted under `~/.ironclaw/`
- Zero config by default — optional `~/.ironclaw/cursor-brain.json`; env vars override
- Cross-platform — Windows, macOS, Linux (Rust + cursor-agent)
Behavior vs OpenAiCompletions: The plugin sends the full conversation to cursor-agent as a single synthesized prompt (System / User / Assistant / Tool result blocks). cursor-agent only accepts one prompt (stdin); tools / tool_choice are accepted in the request but not sent to the agent. See the Guide for details.
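As an illustration of that synthesis, flattening a message list into one prompt for cursor-agent's stdin might look like the sketch below. This is not the plugin's actual code; the exact block format is internal, so the labels and layout here are assumptions.

```rust
/// Illustrative sketch (not the plugin's real implementation): flatten an
/// OpenAI-style message list into a single prompt string for cursor-agent.
struct Msg {
    role: &'static str, // "system" | "user" | "assistant" | "tool"
    content: String,
}

fn synthesize_prompt(messages: &[Msg]) -> String {
    let mut out = String::new();
    for m in messages {
        // Map the OpenAI role to a human-readable block label.
        let label = match m.role {
            "system" => "System",
            "user" => "User",
            "assistant" => "Assistant",
            "tool" => "Tool result",
            other => other,
        };
        out.push_str(&format!("[{}]\n{}\n\n", label, m.content));
    }
    out
}

fn main() {
    let msgs = [
        Msg { role: "system", content: "You are helpful.".into() },
        Msg { role: "user", content: "hi".into() },
    ];
    let prompt = synthesize_prompt(&msgs);
    assert!(prompt.contains("[System]") && prompt.contains("[User]\nhi"));
    print!("{}", prompt);
}
```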
New to the project? The Guide walks you through architecture, configuration, API, and Ironclaw integration in one place.
| Language | Document |
|---|---|
| English | Guide (EN) — Introduction, quick start, core concepts, request lifecycle, configuration, sessions, API reference, Ironclaw integration. |
| 中文 | 使用指南(中文) — 简介、快速开始、核心概念、请求生命周期、配置、会话、API 参考、Ironclaw 集成。 |
| — | Provider definition — JSON to merge into ~/.ironclaw/providers.json. |
- Quick start
- Documentation
- Installation
- Configuration
- Run & validate
- Register as Ironclaw provider
- Session continuity
- API
- License & references
The Guide (EN) and 使用指南(中文) are the main technical docs: tutorial-style, from introduction through configuration, sessions, API, and Ironclaw integration. For the provider JSON only: provider-definition.json.
If you already have Rust, Cursor (for cursor-agent), and Ironclaw set up:
```shell
cargo install ironclaw-cursor-brain
ironclaw-cursor-brain
```

Add a provider entry to `~/.ironclaw/providers.json` (or `%USERPROFILE%\.ironclaw\providers.json` on Windows), then use the Cursor backend from Ironclaw.
Requirements: Rust (stable), cursor-agent (from Cursor or PATH). For the full stack, install in order: PostgreSQL 15+ with pgvector, Ironclaw, then this plugin. Steps below cover Windows, macOS, and Linux.
Ironclaw requires PostgreSQL 15+ and the pgvector extension.
- Windows: Install PostgreSQL 15+ (e.g. the EDB installer). Then install pgvector: ensure Visual Studio Build Tools and `pg_config` (from the PostgreSQL `bin` directory) are on PATH, clone pgvector, then run `nmake /F Makefile.win` and `nmake /F Makefile.win install` in the pgvector directory. Restart PostgreSQL.
- macOS: `brew install postgresql@15` (or download Postgres.app from https://postgresapp.com/downloads.html). Install pgvector: `brew install pgvector` if available, or clone pgvector and run `make && make install` (ensure `pg_config` is on PATH). Start PostgreSQL (e.g. `brew services start postgresql@15`).
- Linux: Install PostgreSQL 15+ via your distro (e.g. `sudo apt install postgresql-15` on Debian/Ubuntu, or see the PostgreSQL docs). Then clone pgvector and run `make && sudo make install`. Start the PostgreSQL service.
One-time database setup (for Ironclaw):
```shell
createdb ironclaw
psql ironclaw -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

See the Ironclaw README for more detail.
- Windows: Download the Windows installer (MSI) and run it, or use the PowerShell script: `irm https://github.com/nearai/ironclaw/releases/latest/download/ironclaw-installer.ps1 | iex`
- macOS / Linux: Run the shell installer: `curl --proto '=https' --tlsv1.2 -LsSf https://github.com/nearai/ironclaw/releases/latest/download/ironclaw-installer.sh | sh`, or install via Homebrew: `brew install ironclaw`. Alternatively, clone the Ironclaw repo and run `cargo build --release`.
Then run `ironclaw onboard` to configure the database and auth. See Ironclaw Releases and the README.
- All platforms: Install from crates.io: `cargo install ironclaw-cursor-brain`. The binary is installed to your Cargo `bin` directory (usually on PATH). Ensure cursor-agent is available (install Cursor or put the agent binary on PATH). To build from source: `git clone https://github.com/nearai/ironclaw-cursor-brain.git && cd ironclaw-cursor-brain && cargo build --release`.
Plugin configuration reuses Ironclaw’s layout: all config lives under the same base directory as Ironclaw (resolved the same way, via the `dirs` crate — `~/.ironclaw/` on macOS/Linux, the user profile on Windows). Optional plugin config file: `cursor-brain.json` in that directory. Provider registration is done via `providers.json` in the same directory (same as Ironclaw’s built-in providers). After running `ironclaw onboard`, add this plugin’s provider entry to that file (see “Register as Ironclaw provider” below).
- Source: Environment variables first; optional file `~/.ironclaw/cursor-brain.json` (env overrides file).
- Options:
  - `cursor_path`: Path to cursor-agent; unset = detect from PATH or platform paths
  - `port`: Listen port, default 3001 (Ironclaw convention: Web Gateway 3000 + 1)
  - `request_timeout_sec`: Per-request timeout (seconds), default 300
  - `session_cache_max`: Session mapping LRU capacity, default 1000
  - `session_header_name`: HTTP header name for the external session id, default `x-session-id` (e.g. `X-Session-Id`)
  - `default_model`: Default model when the request omits `model`; unset = `"auto"`
  - `fallback_model`: If the primary model returns no content, retry once with this model (non-stream only)
Env vars: `CURSOR_PATH`, `PORT` or `IRONCLAW_CURSOR_BRAIN_PORT`, `REQUEST_TIMEOUT_SEC`, `SESSION_CACHE_MAX`, `SESSION_HEADER_NAME`, `CURSOR_BRAIN_DEFAULT_MODEL`, `CURSOR_BRAIN_FALLBACK_MODEL`.
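A minimal `~/.ironclaw/cursor-brain.json` covering these options might look like this. All values are illustrative; in particular, the `cursor_path` shown is a placeholder for wherever cursor-agent lives on your machine:

```json
{
  "cursor_path": "/usr/local/bin/cursor-agent",
  "port": 3001,
  "request_timeout_sec": 300,
  "session_cache_max": 1000,
  "session_header_name": "x-session-id",
  "default_model": "auto",
  "fallback_model": "cursor-default"
}
```

Every key is optional; any key you omit falls back to the defaults above, and environment variables override whatever the file sets.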
- Log level: Set `RUST_LOG` (default `info` if unset). Examples: `RUST_LOG=debug ironclaw-cursor-brain`, or `RUST_LOG=ironclaw_cursor_brain=debug,tower_http=info` to raise only this crate to debug. Levels: `error`, `warn`, `info`, `debug`, `trace`.
Session mappings are always persisted to `~/.ironclaw/cursor-brain-sessions.json` (fixed path, not configurable).
```shell
ironclaw-cursor-brain   # listens on http://0.0.0.0:3001 (or use `cargo run` if built from source)
```

In another terminal:
```shell
curl http://127.0.0.1:3001/v1/health
curl -X POST http://127.0.0.1:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"cursor-default","messages":[{"role":"user","content":"hi"}],"stream":false}'
```

If you see 503 "cursor-agent returned no content": the plugin logs cursor-agent stderr (`cursor_agent_stderr`). The plugin maps the request model `cursor` to `auto` for cursor-agent. If Ironclaw logs "Failed to parse user providers.json", set `protocol` to `"open_ai_completions"` (snake_case), not `OpenAiCompletions`.
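A successful non-streaming call returns an OpenAI-style Chat Completions body. The values below are illustrative of the shape only, not actual plugin output:

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "cursor-default",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello!" },
      "finish_reason": "stop"
    }
  ]
}
```

With `"stream": true`, the same endpoint emits SSE `data:` chunks in the OpenAI delta format instead of a single body.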
Ironclaw has no separate manifest or installer for third-party providers. The only way to register is to add one full ProviderDefinition object to the JSON array in ~/.ironclaw/providers.json. That file is merged with Ironclaw’s built-in providers.json at load time.
- How does it show up in the config flow? The definition must include the `setup` field. The wizard only shows providers that have `setup`; entries without it do not appear in the “Choose LLM provider” list.
- What does `setup.can_list_models: true` do? In the “Select model” step, Ironclaw calls the provider’s GET /v1/models (using `default_base_url`) and shows the returned list for the user to pick from. This plugin implements that endpoint (via `cursor-agent --list-models`), so `can_list_models` should be `true`.
- Who sets it, and where is it stored? The user adds the entry by editing `~/.ironclaw/providers.json` locally; it lives only in that file.
If the file does not exist (e.g. right after `ironclaw onboard`), create it with content `[]`, then merge the entry below into that array. Do not omit the required `model_env` or `setup` fields, or the provider won’t appear in the wizard or may fail to parse. After that you can choose Cursor Brain in the LLM step and pick a model from the list in the next step.
| Field | Required | Description |
|---|---|---|
| `id` | ✓ | Unique id for `LLM_BACKEND`, e.g. `"cursor"` |
| `protocol` | ✓ | Must be `"open_ai_completions"` (snake_case) |
| `model_env` | ✓ | Env var name for the model, e.g. `"CURSOR_BRAIN_MODEL"`; required by Ironclaw |
| `default_model` | ✓ | Default model; use `"auto"` for cursor-agent |
| `description` | ✓ | One-line description for the UI |
| `aliases` | | Aliases for `LLM_BACKEND`, e.g. `["cursor_brain","cursor-brain"]` |
| `default_base_url` | | Default service URL (include `/v1`), e.g. `http://127.0.0.1:3001/v1` |
| `base_url_env` | | Env var to override the base URL, e.g. `"CURSOR_BRAIN_BASE_URL"` |
| `base_url_required` | | Whether a base URL is required; use `false` for this plugin |
| `api_key_required` | | Whether an API key is required; use `false` for this plugin |
| `setup` | | Required to be installable. Wizard hint; `kind: "open_ai_compatible"` shows “Cursor Brain” and asks for a Base URL; `can_list_models: true` makes the wizard call GET /v1/models so the user can pick a model |
You can copy the object from doc/provider-definition.json and merge it into your existing ~/.ironclaw/providers.json array; or use this full JSON (if the file already exists, merge only the { ... } object into the array):
```json
{
  "id": "cursor",
  "aliases": ["cursor_brain", "cursor-brain"],
  "protocol": "open_ai_completions",
  "default_base_url": "http://127.0.0.1:3001/v1",
  "base_url_env": "CURSOR_BRAIN_BASE_URL",
  "base_url_required": false,
  "api_key_required": false,
  "model_env": "CURSOR_BRAIN_MODEL",
  "default_model": "auto",
  "description": "Cursor Agent via ironclaw-cursor-brain (local OpenAI-compatible proxy)",
  "setup": {
    "kind": "open_ai_compatible",
    "secret_name": "llm_cursor_brain_api_key",
    "display_name": "Cursor Brain",
    "can_list_models": true
  }
}
```

- Install: Start this service first (`ironclaw-cursor-brain` if installed via crates.io, or `cargo run` / the release binary from source); it listens on `http://127.0.0.1:3001` by default.
- Configure: Run `ironclaw onboard` and choose Cursor Brain in the LLM step; the wizard will ask for the Base URL; press Enter to use the default `http://127.0.0.1:3001/v1`, or enter another host/port. No API key is needed.
- Use: Set `LLM_BACKEND=cursor` (or `cursor_brain`). Optional env: `CURSOR_BRAIN_BASE_URL`, `CURSOR_BRAIN_MODEL`.
After that, Ironclaw uses this service like other OpenAiCompletions providers; default port 3001 follows the Ironclaw convention (Web Gateway 3000 + 1).
Send the configured session header (default `X-Session-Id`) with the same value on each request; the service keeps an "external session id → cursor session_id" mapping (in-process LRU, capacity configurable). The next request with the same session uses `--resume` to continue the conversation. If a resume returns no content, the mapping is cleared and the request is retried once without resume.
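The mapping described above can be sketched as a small LRU cache. This is a simplified illustration, not the plugin's code; method names and the eviction strategy are assumptions:

```rust
use std::collections::HashMap;

/// Illustrative sketch: an LRU map from the external session id
/// (the X-Session-Id header value) to cursor-agent's session_id.
struct SessionCache {
    cap: usize,
    map: HashMap<String, String>,
    order: Vec<String>, // most-recently-used last
}

impl SessionCache {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: Vec::new() }
    }

    fn insert(&mut self, external: String, cursor: String) {
        self.order.retain(|k| k != &external);
        self.order.push(external.clone());
        self.map.insert(external, cursor);
        if self.order.len() > self.cap {
            // Evict the least-recently-used entry.
            let evicted = self.order.remove(0);
            self.map.remove(&evicted);
        }
    }

    fn get(&mut self, external: &str) -> Option<&String> {
        if self.map.contains_key(external) {
            // Touch: move the key to the most-recently-used position.
            self.order.retain(|k| k != external);
            self.order.push(external.to_string());
        }
        self.map.get(external)
    }

    /// Called when a `--resume` attempt returns no content: drop the
    /// stale mapping so the retry starts a fresh conversation.
    fn invalidate(&mut self, external: &str) {
        self.order.retain(|k| k != external);
        self.map.remove(external);
    }
}

fn main() {
    let mut cache = SessionCache::new(2);
    cache.insert("a".into(), "cursor-1".into());
    cache.insert("b".into(), "cursor-2".into());
    let _ = cache.get("a"); // touch "a" so "b" is evicted next
    cache.insert("c".into(), "cursor-3".into());
    assert!(cache.get("b").is_none());
    assert_eq!(cache.get("a").map(String::as_str), Some("cursor-1"));
    cache.invalidate("a");
    assert!(cache.get("a").is_none());
    println!("ok");
}
```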
The mapping is persisted to `~/.ironclaw/cursor-brain-sessions.json` (temp file + rename on each write). Keep that file across restarts to preserve sessions.
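The "temp file + rename" write is the standard atomic-replace idiom: readers never observe a partially written file. A minimal sketch (function and temp-file naming are illustrative, not the plugin's actual code):

```rust
use std::fs;
use std::path::Path;

/// Illustrative sketch: write the serialized session map to a sibling
/// temp file, then rename it over the target in one step so a crash
/// mid-write cannot corrupt the existing file.
fn persist_sessions(path: &Path, json: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("json.tmp");
    fs::write(&tmp, json)?;
    fs::rename(&tmp, path)?; // atomic replace on the same filesystem
    Ok(())
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("cursor-brain-sessions-demo.json");
    persist_sessions(&target, r#"{"session-1":"cursor-abc"}"#)?;
    assert_eq!(fs::read_to_string(&target)?, r#"{"session-1":"cursor-abc"}"#);
    println!("ok");
    Ok(())
}
```

The rename is only atomic when the temp file lives on the same filesystem as the target, which is why the sketch places it next to the destination rather than in a system temp directory.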
| Endpoint | Description |
|---|---|
| `POST /v1/chat/completions` | OpenAI-style body; supports `stream: true` (SSE) and `stream: false` |
| `GET /v1/models` | Model list (see below) |
| `GET /v1/health` | Health and cursor availability |
`GET /v1/models`: Returns the list of model ids this service supports. Ironclaw calls this when configuring the LLM and shows the list in the wizard so the user can select which model to use for Cursor. The list is queried from cursor-agent by running `cursor-agent --list-models` on each request; it is not configured or stored anywhere. If the agent is unavailable or the query times out (~15s), the default list `["auto", "cursor-default"]` is returned.
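That fallback behavior can be sketched as follows. This is illustrative only: the actual output format of `cursor-agent --list-models` is an assumption here (one model id per line), and the model names in the example are placeholders:

```rust
/// Illustrative sketch: parse agent output (assumed one model id per
/// line) and fall back to the documented defaults when nothing usable
/// came back (agent missing, timeout, or empty output).
fn models_or_default(raw: Option<&str>) -> Vec<String> {
    let parsed: Vec<String> = raw
        .unwrap_or("")
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect();
    if parsed.is_empty() {
        vec!["auto".into(), "cursor-default".into()]
    } else {
        parsed
    }
}

fn main() {
    // Agent unavailable: the documented default list is returned.
    assert_eq!(models_or_default(None), vec!["auto", "cursor-default"]);
    // Agent returned two (placeholder) model ids.
    assert_eq!(
        models_or_default(Some("model-a\n model-b \n")),
        vec!["model-a", "model-b"]
    );
    println!("ok");
}
```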
`temperature` and `max_tokens` are parsed but not forwarded to cursor-agent (it uses its own defaults).
- License: LICENSE (MIT).
- Contributing: CONTRIBUTING.md.
- Technical reference: doc/TECHNICAL.md.
- Implementation reference: openclaw-cursor-brain (cursor-agent wrapping, stream-json).
- Integration with Ironclaw is via the provider registry only; no OpenClaw concepts.
