58 changes: 58 additions & 0 deletions README.md
@@ -33,6 +33,7 @@ That's it. Start chatting. Your agent remembers everything.

```bash
nuum -p "What files are in src/" # Single prompt
nuum --mcp # Run as an MCP server
nuum --inspect # View memory stats
nuum --db ./project.db --repl # Custom database
```
@@ -75,6 +76,63 @@ MCP tools appear alongside built-in tools. The agent discovers and uses them automatically.

---

## MCP Server Mode

The `--mcp` flag starts nuum as an MCP server, allowing other tools (Claude Code, Codex, etc.) to create and talk to persistent nuum agents. Each agent gets its own SQLite database with full persistent memory.

Agent databases live in `.nuum/agents/<name>.db` relative to the working directory.
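
For example, after creating the `reviewer` and `helper` agents used in the example below, the working directory would be expected to contain something like:

```
.nuum/
  agents/
    reviewer.db
    helper.db
```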

### Setup

```bash
# Add to Claude Code
claude mcp add nuum -- nuum --mcp

# Or from source during development
claude mcp add nuum -- bun run /path/to/nuum/dist/index.js --mcp
```

For other MCP clients:

```json
{
  "mcpServers": {
    "nuum": {
      "command": "nuum",
      "args": ["--mcp"]
    }
  }
}
```

### Tools

| Tool | Description |
|------|-------------|
| `list_agents` | List all agents with name, mission, status, and timestamps |
| `create_agent` | Create a new agent with optional system prompt |
| `send_message` | Send a message to an agent (with optional `create_if_missing`) |

Each agent is a persistent conversation — just call `send_message` with the same agent name to continue where you left off.

### Example

```
# Create a specialized agent
create_agent(name: "reviewer", system_prompt: "You are a code review specialist")

# Send it a prompt
send_message(agent: "reviewer", prompt: "Review this function for bugs: ...")

# Continue the conversation (agent remembers everything)
send_message(agent: "reviewer", prompt: "What about error handling?")

# Or create-on-first-use
send_message(agent: "helper", prompt: "Hello!", create_if_missing: true)
```
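
Clients other than Claude Code can also drive these tools programmatically. A minimal sketch using the official MCP TypeScript SDK client, assuming `@modelcontextprotocol/sdk` is installed and `nuum` is on the `PATH` (neither is shown in this diff):

```typescript
import {Client} from '@modelcontextprotocol/sdk/client/index.js'
import {StdioClientTransport} from '@modelcontextprotocol/sdk/client/stdio.js'

// Spawn `nuum --mcp` as a child process and speak MCP over its stdin/stdout.
const transport = new StdioClientTransport({command: 'nuum', args: ['--mcp']})
const client = new Client({name: 'example-client', version: '0.0.1'})
await client.connect(transport)

// Create-on-first-use, then keep calling with the same agent name to continue.
const result = await client.callTool({
  name: 'send_message',
  arguments: {agent: 'helper', prompt: 'Hello!', create_if_missing: true},
})
console.log(result)

await client.close()
```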

---

## Embedding in Applications

Nuum is designed to be **embedded**. While it runs standalone, its primary use case is integration into host applications, IDEs, and orchestration platforms.
17 changes: 17 additions & 0 deletions src/cli/index.ts
@@ -28,6 +28,7 @@ interface CliOptions {
  dump: boolean
  compact: boolean
  stdio: boolean
  mcp: boolean
  repl: boolean
}

@@ -44,6 +45,7 @@ function parseCliArgs(): CliOptions {
      dump: {type: 'boolean', default: false},
      compact: {type: 'boolean', default: false},
      stdio: {type: 'boolean', default: false},
      mcp: {type: 'boolean', default: false},
      repl: {type: 'boolean', default: false},
    },
    allowPositionals: false,
@@ -60,6 +62,7 @@
    dump: values.dump ?? false,
    compact: values.compact ?? false,
    stdio: values.stdio ?? false,
    mcp: values.mcp ?? false,
    repl: values.repl ?? false,
  }
}
@@ -75,6 +78,7 @@ Usage:
  nuum -p "prompt" --verbose    Show debug output
  nuum --repl                   Start interactive REPL mode
  nuum --stdio                  Start protocol server over stdin/stdout
  nuum --mcp                    Start MCP server over stdin/stdout
  nuum --inspect                Show memory stats (no LLM call)
  nuum --dump                   Show raw system prompt (no LLM call)
  nuum --compact                Force run compaction (distillation)
@@ -85,6 +89,7 @@ Options:
  -v, --verbose     Show memory state, token usage, and execution trace
  --repl            Start interactive REPL with readline support
  --stdio           Start Claude Code SDK protocol server on stdin/stdout
  --mcp             Start as an MCP server
  --inspect         Show memory statistics: temporal, present, LTM
  --dump            Dump the full system prompt that would be sent to LLM
  --compact         Force run compaction to reduce effective view size
@@ -168,6 +173,18 @@ async function main(): Promise<void> {
    return
  }

  // Handle --mcp (MCP server mode)
  if (options.mcp) {
    try {
      const {runMcpServer} = await import('../mcp-server')
      await runMcpServer()
    } catch (error) {
      printError(error, {verbose: options.verbose})
      process.exit(1)
    }
    return
  }

  // Handle --inspect (no LLM call)
  if (options.inspect) {
    try {
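
The `../mcp-server` module imported by the `--mcp` handler is not part of this diff. For orientation only, here is a minimal sketch of what such an entry point might look like using the official `@modelcontextprotocol/sdk` package; the tool wiring, stub reply, and version string are assumptions, not the actual implementation:

```typescript
import {McpServer} from '@modelcontextprotocol/sdk/server/mcp.js'
import {StdioServerTransport} from '@modelcontextprotocol/sdk/server/stdio.js'
import {z} from 'zod'

export async function runMcpServer(): Promise<void> {
  const server = new McpServer({name: 'nuum', version: '0.0.0'})

  // Hypothetical wiring: the real module would open .nuum/agents/<name>.db
  // and run the prompt through nuum's agent loop before answering.
  server.tool(
    'send_message',
    {
      agent: z.string(),
      prompt: z.string(),
      create_if_missing: z.boolean().optional(),
    },
    async ({agent, prompt}) => ({
      content: [{type: 'text' as const, text: `(stub) ${agent} received: ${prompt}`}],
    }),
  )

  // Serve MCP over stdin/stdout so clients can spawn `nuum --mcp` directly.
  await server.connect(new StdioServerTransport())
}
```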