The AI agent that doesn't ruin everything.
A TypeScript library for running chat sessions with LLM tool-calling models, featuring support for automatic subtasks, parallel execution, and a rich set of built-in tools.
```bash
# Clone the repository
git clone https://github.com/Foundation42/doom.git
cd doom

# Install dependencies
npm install

# Build the project
npm run build

# Install globally (optional)
npm link
```

```bash
# Start the interactive DOOM experience
doom

# Or run a single command
doom "What's the weather in Tokyo?"

# Use parallel execution for faster results
doom --parallel "Compare the weather in London, New York and Tokyo"

# Hide tool execution visualization in interactive mode
doom --notool

# Show tool execution visualization in batch mode (hidden by default)
doom --showtool "What's the weather in Tokyo?"

# Force verbose output in batch mode
doom --verbose "What's the weather in Tokyo?"

# Start the modern React-based CLI (Claude Code-inspired UI)
npm run doom:react
```
```
# REPL commands
/help             - Show help message
/tools            - List available tools
/clear            - Clear conversation history
/exit             - Exit DOOM
/tools on/off     - Enable/disable tool visualization
/parallel on/off  - Enable/disable parallel execution
```

DOOM now features a modern, component-based CLI built with React and Ink, inspired by Claude Code's interface:
- Component-based architecture with proper state management
- Multi-line input with cursor control
- Command history navigation with up/down arrows
- Tool execution visualization
- Status bar with mode indicators
- Tab completion for commands (coming soon)
To run the modern CLI interface:
```bash
npm run doom:react
```

Core features:

- Tool calling with OpenAI's API
- Streaming responses
- Subtasks for automatic task orchestration
- Parallel execution of subtasks
- Retry logic for transient errors
- Timeout handling
- Cancellation via AbortSignal
- Error-resistant JSON parsing with schema validation
- Customizable logging
- Parent-child relationship tracking
- Visual tool execution display in the CLI
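The retry behavior listed above can be sketched conceptually. This is an illustrative implementation of retry-with-exponential-backoff, not DOOM's actual internals; `withRetries` is a hypothetical helper, and the `maxRetries`/`retryDelayMs` names mirror the configuration options shown later in this README.

```typescript
// Illustrative sketch of retry logic for transient errors (NOT DOOM's
// actual implementation): retries up to maxRetries times, doubling the
// base delay on each failed attempt.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  retryDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: base delay doubles with each attempt
      await new Promise((r) => setTimeout(r, retryDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```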
```bash
npm install
# or
yarn install
# or
pnpm install
```

The repository includes several examples to demonstrate DOOM's capabilities:
```bash
# Build the project
npm run build

# Run the interactive REPL
npm run repl

# Run a single command and exit
npm run run "What's the weather in Tokyo?"

# Run with parallel tool execution
npm run run "Compare the weather in Tokyo, London and New York" -- --parallel

# Run the subtasks example
npm run subtasks

# Run the parallel execution benchmark
npm run parallel

# Explore the standard tools library (requires OpenAI API key)
npm run tools

# Run basic tools demo locally (no API key needed)
npm run tools:local

# Run advanced tools demo locally (no API key needed)
npm run tools:advanced

# Run advanced AI tools demo (requires OpenAI API key)
npm run tools:ai
```

The examples will check for API keys in the following order:
- Environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `MISTRAL_API_KEY`, etc.
- Home directory file: `~/.env` (format: `OPENAI_API_KEY=sk-...`, one key per line)
- Manual input: if no key is found, some examples will prompt you to enter one
For the advanced AI tools demo, at least the OpenAI API key is required. Additional provider keys are optional but enable the multi-provider comparison features.
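The lookup order above can be sketched in a few lines. This is an illustrative sketch only; `resolveApiKey` is a hypothetical helper, not part of DOOM's exported API, and the `~/.env` parsing assumes the simple `KEY=value` format described above.

```typescript
import { existsSync, readFileSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';

// Illustrative sketch of the key lookup order described above (NOT part
// of DOOM's public API): 1. process environment, 2. ~/.env file.
function resolveApiKey(
  name: string,
  envFilePath = join(homedir(), '.env')
): string | undefined {
  // 1. Environment variable takes precedence
  const fromEnv = process.env[name];
  if (fromEnv) return fromEnv;

  // 2. Fall back to the env file (KEY=value, one per line)
  if (existsSync(envFilePath)) {
    for (const line of readFileSync(envFilePath, 'utf8').split('\n')) {
      const eq = line.indexOf('=');
      if (eq > 0 && line.slice(0, eq).trim() === name) {
        return line.slice(eq + 1).trim();
      }
    }
  }
  // 3. Nothing found; the caller may prompt interactively
  return undefined;
}
```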
```typescript
import { runChatWithTools } from 'doom';

// Define a tool
const weatherTool = {
  name: 'getWeather',
  description: 'Get weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' }
    },
    required: ['location']
  },
  func: async (args) => {
    return `It's sunny in ${args.location}`;
  }
};

// Initial chat history
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: "What's the weather in Paris?" }
];

// Run the chat with tools
const result = await runChatWithTools(messages, [weatherTool]);
console.log(result);
```

The most powerful feature of DOOM is the ability to have tools automatically trigger other tools by returning subtasks:
```typescript
const getUserProfileTool = {
  name: 'getUserProfile',
  description: 'Get user profile information',
  parameters: {
    type: 'object',
    properties: {
      userId: { type: 'string', description: 'User ID' }
    },
    required: ['userId']
  },
  func: async (args) => {
    // Return both output and subtasks
    return {
      output: `Found user profile for ${args.userId}`,
      subTasks: [
        {
          toolName: 'getOrderHistory',
          args: { userId: args.userId, limit: 3 }
        },
        {
          toolName: 'getRecommendations',
          args: { userId: args.userId }
        }
      ]
    };
  }
};
```

The subtasks will be executed in order (or in parallel if configured), and their results will be included in the conversation context. Subtasks can also return their own subtasks, creating a flexible workflow tree.
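Conceptually, executing such a workflow tree amounts to a depth-first walk: run a tool, collect its output, then recurse into any subtasks it returned. The sketch below illustrates that idea in isolation; it is not DOOM's actual scheduler, and the `runWithSubtasks` helper and its types are invented here for illustration.

```typescript
// Conceptual sketch of walking a subtask tree (NOT DOOM's internals):
// each tool may return subTasks, which are executed in order, and those
// tools may in turn return further subtasks.
type SubTask = { toolName: string; args: Record<string, unknown> };
type ToolResult = { output: string; subTasks?: SubTask[] };
type ToolFunc = (args: Record<string, unknown>) => Promise<ToolResult>;

async function runWithSubtasks(
  registry: Map<string, ToolFunc>,
  task: SubTask,
  results: string[] = []
): Promise<string[]> {
  const tool = registry.get(task.toolName);
  if (!tool) throw new Error(`Unknown tool: ${task.toolName}`);
  const result = await tool(task.args);
  results.push(result.output);
  // Recurse into any returned subtasks, depth-first, in order
  for (const sub of result.subTasks ?? []) {
    await runWithSubtasks(registry, sub, results);
  }
  return results;
}
```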
You can significantly improve performance by enabling parallel execution of subtasks:
```typescript
// Simple boolean flag for default parallel settings
const result = await runChatWithTools(messages, tools, { parallel: true });

// Or with detailed configuration
const result = await runChatWithTools(messages, tools, {
  parallel: {
    enabled: true,       // Enable parallel execution
    maxConcurrent: 4,    // Maximum concurrent tasks
    includeNested: true, // Run nested subtasks in parallel too
    maxDepth: 2          // Maximum depth level for parallelism
  }
});
```

Parallel execution is especially useful when:
- You have multiple independent subtasks that don't depend on each other
- Your tools perform network requests or other I/O operations
- You need to process many subtasks quickly
The `maxConcurrent` setting helps control resource usage by limiting how many tasks run at once.
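The idea behind a `maxConcurrent`-style limit can be sketched with a small worker pool. This is an illustrative implementation, not DOOM's actual scheduler; `runLimited` is a hypothetical helper introduced here.

```typescript
// Illustrative sketch of a maxConcurrent-style limiter (NOT DOOM's
// actual scheduler): run at most `limit` async tasks at once while
// preserving result order.
async function runLimited<T>(
  tasks: Array<() => Promise<T>>,
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker repeatedly pulls the next task index until none remain.
  // (next++ is safe here: JS runs single-threaded between awaits.)
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```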
DOOM supports custom loggers for better integration with your application's logging system:
```typescript
import { createConsoleLogger } from 'doom';

// Create a custom logger
const myLogger = {
  debug: (message, ...args) => console.debug(`[DEBUG] ${message}`, ...args),
  info: (message, ...args) => console.log(`[INFO] ${message}`, ...args),
  warn: (message, ...args) => console.warn(`[WARNING] ${message}`, ...args),
  error: (message, ...args) => console.error(`[ERROR] ${message}`, ...args)
};

// Or use the built-in helper
const consoleLogger = createConsoleLogger('MyApp:', 'info');

// Pass the logger in options
const result = await runChatWithTools(messages, tools, { logger: myLogger });
```

```typescript
const options = {
  modelName: 'gpt-4o-mini',       // OpenAI model to use
  temperature: 0.7,               // Model temperature
  maxRetries: 3,                  // Max retries on transient errors
  retryDelayMs: 1000,             // Base delay before retry attempts
  signal: abortController.signal, // AbortSignal for cancellation
  timeoutMs: 30000,               // Timeout in milliseconds
  logger: customLogger,           // Custom logger implementation
  parallel: {                     // Parallel execution configuration
    enabled: true,                // Enable parallel execution
    maxConcurrent: 4,             // Maximum concurrent tasks
    includeNested: true,          // Run nested subtasks in parallel too
    maxDepth: 2                   // Maximum depth level for parallelism
  }
};

const result = await runChatWithTools(messages, tools, options);
```

DOOM includes a standard library of reusable tools that you can include in your projects:
```typescript
import { createStandardTools } from 'doom';

// Get the entire standard tool library
const allTools = createStandardTools();

// Or import specific tool categories
import {
  createHttpTools,
  createDataTools,
  createUtilityTools
} from 'doom/tools';

const httpTools = createHttpTools();
const dataTools = createDataTools();
const utilityTools = createUtilityTools();

// Run with the tools
const result = await runChatWithTools(messages, allTools);
```
- HTTP Tools
  - `fetchUrl`: Fetches content from a URL with support for various HTTP methods
  - `searchWeb`: Simulates web search and returns relevant results
  - `extractData`: Extracts structured data from HTML or text content
- Data Tools
  - `parseJson`: Validates and formats JSON data with comprehensive schema validation
  - `processCSV`: Handles CSV parsing, transformation, multiple output formats (CSV, JSON, ASCII tables), filtering, and cell transformations
  - `filterData`: Filters, sorts, and aggregates data with complex filtering criteria
  - `generateChart`: Creates data visualizations (simulated)
- Utility Tools
  - `calculator`: Performs mathematical calculations and unit conversions
  - `dateTime`: Handles date formatting, calculations, and manipulation
  - `translate`: Simulates language translation
  - `stringUtils`: Provides string manipulation operations
  - `cryptoHash`: Performs hashing, encoding, and generation operations
- File Tools
  - `readFile`: Reads content from files with encoding and limit options
  - `writeFile`: Writes or appends content to files
  - `listFiles`: Lists files in directories with filtering options
  - `fileInfo`: Gets detailed information about files and directories
  - `searchFiles`: Searches file contents for patterns and text
- AI Tools
  - `summarizeText`: Creates concise summaries of longer text
  - `analyzeSentiment`: Determines sentiment and emotion in text
  - `extractKeywords`: Extracts key topics and terms from text
  - `classifyText`: Categorizes text into topics or intents
  - `translateText`: AI-powered text translation (enhanced version)
- System Tools
  - `systemInfo`: Provides OS, CPU, memory, and disk information
  - `processInfo`: Shows details of running processes
  - `environment`: Works with environment variables
  - `networkInfo`: Shows network interfaces and connectivity
  - `executeCommand`: Runs safe system commands with security limits
- LLM Tools (requires API keys)
  - `multiProviderCompletion`: Sends prompts to different LLM providers (OpenAI, Anthropic, Google, etc.)
  - `adaptiveCompletion`: Automatically selects appropriate models based on task type
  - `transformText`: Specialized text transformations using task-specific models
  - `compareLLMResponses`: Compares responses from multiple LLM providers for the same prompt
  - `chainOfThought`: Step-by-step reasoning for complex problem solving
- TTS Tools (requires OpenAI API key)
  - `speakText`: Basic text-to-speech with voice selection
  - `narrateContent`: Specialized narration styles for different content types
  - `expressiveSpeech`: Character- and emotion-based speech generation
The tools library also includes useful helper functions:
```typescript
import { safeToolExecution, delay } from 'doom';

// Safely execute a function with error handling
const result = await safeToolExecution(
  async () => {
    // Your tool logic here
    return { output: "Success!" };
  },
  (error) => `Custom error message: ${error.message}`
);

// Create a delay with abort signal support
await delay(1000, abortSignal);
```

Try the `npm run tools` command to explore the standard library in an interactive demo.
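For reference, a delay helper with `AbortSignal` support can be implemented along these lines. This is a sketch of one possible approach, not DOOM's actual `delay` implementation; `abortableDelay` is a name invented here to avoid confusion with the library's export.

```typescript
// One way a delay-with-cancellation helper could work (a sketch, NOT
// DOOM's actual `delay` implementation): resolves after `ms`, or rejects
// early if the signal fires.
function abortableDelay(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      return reject(new Error('Aborted'));
    }
    const timer = setTimeout(() => {
      signal?.removeEventListener('abort', onAbort);
      resolve();
    }, ms);
    function onAbort() {
      clearTimeout(timer); // avoid leaking the timer on cancellation
      reject(new Error('Aborted'));
    }
    signal?.addEventListener('abort', onAbort, { once: true });
  });
}
```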
MIT