A production-ready TypeScript framework for building reliable, type-safe LLM applications with structured outputs, reasoning patterns, and intelligent tool integration.
Quick Start • Examples • API Reference • Contributing
TS-DSPy brings the powerful paradigms of Stanford's DSPy framework to the TypeScript ecosystem with full type safety, modern tooling, and production-grade features. Whether you're building AI chatbots, data processing pipelines, or intelligent agents, TS-DSPy provides the abstractions you need.
- Type-Safe Signatures: Define input/output schemas with automatic validation and TypeScript inference
- ReAct Pattern: Built-in Reasoning and Acting with intelligent tool integration
- Enhanced Tool Descriptions: Provide detailed tool descriptions for better AI decision-making
- Multiple LLM Support: Supports OpenAI and Google Gemini with an extensible architecture for other providers
- Automatic Parsing: Converts raw LLM outputs to structured TypeScript objects
- Robust Error Handling: Comprehensive validation with automatic retries and fallbacks
- Usage Tracking: Built-in token usage and cost monitoring
- Zero Config: Works out of the box with minimal setup
- Modular Design: Pluggable architecture for LLM providers and custom modules
# Core framework
npm install @ts-dspy/core
# OpenAI integration
npm install @ts-dspy/openai
# Gemini integration
npm install @ts-dspy/gemini
Requirements:
- Node.js 18+
- TypeScript 5.0+
Use ts-node to run TypeScript files directly. Transpiling to JavaScript may cause issues with decorators and type information.
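Because the framework relies on decorators and runtime type metadata, your tsconfig likely needs the decorator flags enabled. A guess at a minimal configuration (verify against your project's actual requirements):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```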
# Run examples with ts-node
npx ts-node examples/basic-usage.ts
npx ts-node examples/basic-gemini-example.ts
# Or install globally
npm install -g ts-node
ts-node your-script.ts
See ./examples/basic-usage.ts and ./examples/basic-gemini-example.ts, as well as the examples below, to try the package.
import { configure, Predict } from '@ts-dspy/core';
import { OpenAILM } from '@ts-dspy/openai';
// or import { GeminiLM } from '@ts-dspy/gemini';
// Configure your LLM provider
configure({
lm: new OpenAILM({
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4'
})
// or
// lm: new GeminiLM({
// apiKey: process.env.GEMINI_API_KEY,
// model: 'gemini-2.0-flash'
// })
});
// Simple question-answering
const qa = new Predict("question -> answer");
const result = await qa.forward({
question: "What is the capital of France?"
});
console.log(result.answer); // "Paris"
import { Signature, InputField, OutputField } from '@ts-dspy/core';
class SentimentAnalysis extends Signature {
@InputField({ description: "Text to analyze for sentiment" })
text!: string;
@OutputField({ description: "Sentiment classification" })
sentiment!: 'positive' | 'negative' | 'neutral';
@OutputField({ description: "Confidence score between 0 and 1" })
confidence!: number;
static description = "Analyze the sentiment of the given text";
}
const classifier = new Predict(SentimentAnalysis);
const result = await classifier.forward({
text: "I love this framework!"
});
// Full TypeScript autocomplete and type safety
console.log(result.sentiment); // Type: 'positive' | 'negative' | 'neutral'
console.log(result.confidence); // Type: number
Signatures are the foundation of TS-DSPy, defining the input/output structure for your LLM interactions.
// Basic signature
const qa = new Predict("question -> answer");
// Multi-output with types
const analyzer = new Predict("text -> sentiment: string, score: float, summary");
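The string syntax above is a tiny DSL: comma-separated fields on each side of `->`, each optionally annotated with a type, with untyped fields presumably treated as strings. A minimal sketch of how such a string might be parsed (`parseSignature` and the string-by-default rule are illustrative assumptions, not TS-DSPy internals):

```typescript
interface Field { name: string; type: string; }

// Parse "a, b: type -> c, d: type" into typed input/output field lists.
function parseSignature(sig: string): { inputs: Field[]; outputs: Field[] } {
  const [lhs, rhs] = sig.split("->").map(s => s.trim());
  const parseSide = (side: string): Field[] =>
    side.split(",").map(part => {
      const [name, type] = part.split(":").map(s => s.trim());
      return { name, type: type ?? "string" }; // assumption: untyped defaults to string
    });
  return { inputs: parseSide(lhs), outputs: parseSide(rhs) };
}

const sig = parseSignature("text -> sentiment: string, score: float, summary");
// sig.outputs[1] is { name: "score", type: "float" }
```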
class DataExtraction extends Signature {
@InputField({ description: "Raw text to extract data from" })
text!: string;
@OutputField({ description: "Extracted person names" })
names!: string[];
@OutputField({ description: "Extracted dates in ISO format" })
dates!: string[];
@OutputField({ description: "Confidence level" })
confidence!: number;
}
const predictor = new Predict("context, question -> answer");
const result = await predictor.forward({
context: "The sky is blue because of Rayleigh scattering.",
question: "Why is the sky blue?"
});
const reasoner = new ChainOfThought("problem -> solution: int");
const result = await reasoner.forward({
problem: "If I have 3 apples and buy 5 more, then eat 2, how many do I have?"
});
console.log(result.reasoning); // "First I have 3 apples..."
console.log(result.solution); // 6
const agent = new RespAct("question -> answer", {
tools: {
calculate: {
description: "Performs mathematical calculations",
function: (expr: string) => eval(expr) // demo only; never eval untrusted input
},
search: {
description: "Searches for information online",
function: async (query: string) => await searchWeb(query)
}
},
maxSteps: 5
});
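Conceptually, an agent like this alternates reasoning and tool calls until it produces an answer or exhausts maxSteps. A self-contained sketch of such a loop (the names and shape here are assumptions for illustration, not the actual RespAct implementation):

```typescript
type Tool = { description: string; function: (...args: any[]) => any };
type Decision = { tool?: string; args?: any[]; answer?: string };

// Reason -> act -> observe, repeating until the "LLM" (the step callback
// in this sketch) emits a final answer or maxSteps is exceeded.
async function reactLoop(
  step: (history: string[]) => Decision,
  tools: Record<string, Tool>,
  maxSteps: number
): Promise<string> {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const decision = step(history);                 // "reason"
    if (decision.answer !== undefined) return decision.answer;
    const tool = tools[decision.tool!];             // "act"
    const observation = await tool.function(...(decision.args ?? []));
    history.push(`${decision.tool} -> ${observation}`); // "observe"
  }
  throw new Error("maxSteps exceeded without a final answer");
}
```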
TS-DSPy features an advanced tool system with intelligent descriptions that help LLMs make better decisions about when and how to use tools.
const financialAgent = new RespAct("question -> answer", {
tools: {
fetchStockPrice: {
description: "Retrieves current stock price for a ticker symbol (e.g., AAPL, GOOGL). Returns price in USD. Use when you need current market data.",
function: async (symbol: string) => {
const response = await fetch(`/api/stocks/${symbol}`);
return response.json();
}
},
calculatePortfolioValue: {
description: "Calculates total portfolio value given holdings. Takes array of {symbol, shares} objects. Use for portfolio analysis.",
function: (holdings: Array<{symbol: string, shares: number}>) => {
return holdings.reduce((total, holding) =>
total + (getStockPrice(holding.symbol) * holding.shares), 0
);
}
},
convertCurrency: {
description: "Converts amounts between currencies using live rates. Params: amount (number), from (currency code), to (currency code).",
function: (amount: number, from: string, to: string) => {
return convertCurrency(amount, from, to);
}
}
}
});
// The LLM now has context about when and how to use each tool
const result = await financialAgent.forward({
question: "I own 100 AAPL shares and 50 TSLA shares. What's my portfolio worth in EUR?"
});
- Better Tool Selection: LLMs make more intelligent decisions about which tools to use
- Self-Documenting: Your code becomes more readable and maintainable
- Improved Parameters: Descriptions guide proper parameter formatting
- Reduced Errors: Clear descriptions prevent tool misuse
- Backward Compatible: Legacy function-only tools still work
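Backward compatibility implies tools can be registered either as bare functions or as `{description, function}` objects. A sketch of how both shapes could be normalized internally (the `ToolSpec` type and `normalizeTool` helper are assumptions, not TS-DSPy API):

```typescript
type ToolFn = (...args: any[]) => any;
interface ToolSpec { description: string; function: ToolFn; }

// Accept either registration style and always return the enhanced shape.
function normalizeTool(name: string, tool: ToolFn | ToolSpec): ToolSpec {
  if (typeof tool === "function") {
    // Legacy style: fall back to a generic description.
    return { description: `Tool: ${name}`, function: tool };
  }
  return tool;
}

// Legacy, function-only registration still works:
const legacy = normalizeTool("add", (a: number, b: number) => a + b);
// Enhanced registration carries its own description:
const enhanced = normalizeTool("add", {
  description: "Adds two numbers",
  function: (a: number, b: number) => a + b,
});
```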
const lm = new OpenAILM({ apiKey: process.env.OPENAI_API_KEY });
const module = new Predict("question -> answer", lm);
await module.forward({ question: "Hello world!" });
// Get detailed usage statistics
const usage = lm.getUsage();
console.log(`Tokens: ${usage.totalTokens}`);
console.log(`Cost: $${usage.totalCost}`);
console.log(`Requests: ${usage.requestCount}`);
import { configure, getTraceHistory } from '@ts-dspy/core';
configure({
lm: new OpenAILM({ apiKey: process.env.OPENAI_API_KEY }),
cache: true, // Enable response caching
tracing: true, // Record all interactions
maxRetries: 3, // Auto-retry failed requests
timeout: 30000 // Request timeout in ms
});
// After running modules, analyze performance
const traces = getTraceHistory();
traces.forEach(trace => {
console.log(`Module: ${trace.moduleId}`);
console.log(`Duration: ${trace.duration}ms`);
console.log(`Tokens: ${trace.usage.totalTokens}`);
console.log(`Cost: $${trace.usage.totalCost}`);
});
import { OpenAILM } from '@ts-dspy/openai';
import { GeminiLM } from '@ts-dspy/gemini';
const fastLM = new GeminiLM({
apiKey: process.env.GEMINI_API_KEY,
model: 'gemini-2.0-flash' // Fast and cost-effective
});
const smartLM = new OpenAILM({
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4' // Powerful for complex reasoning
});
// Use different LLMs for different tasks
const quickQA = new Predict("question -> answer", fastLM);
const complexReasoner = new ChainOfThought("problem -> solution", smartLM);
class ContentAnalysis extends Signature {
@InputField({ description: "Article text to analyze" })
article!: string;
@OutputField({ description: "Main topics covered" })
topics!: string[];
@OutputField({ description: "Article sentiment" })
sentiment!: 'positive' | 'negative' | 'neutral';
@OutputField({ description: "Reading difficulty (1-10)" })
difficulty!: number;
@OutputField({ description: "Key takeaways" })
takeaways!: string[];
}
const analyzer = new Predict(ContentAnalysis);
const result = await analyzer.forward({
article: "Your article text here..."
});
const researcher = new RespAct("research_query -> comprehensive_answer", {
tools: {
searchAcademic: {
description: "Searches academic papers and journals. Use for scholarly research and citations.",
function: async (query: string) => await searchScholar(query)
},
searchWeb: {
description: "Searches general web content. Use for current events and general information.",
function: async (query: string) => await searchGoogle(query)
},
summarizeText: {
description: "Summarizes long text into key points. Use when you have lengthy content to process.",
function: (text: string) => summarizeContent(text)
}
},
maxSteps: 8
});
const result = await researcher.forward({
research_query: "What are the latest developments in quantum computing algorithms?"
});
// Multi-step data processing with type safety
class DataProcessor extends Signature {
@InputField({ description: "Raw CSV data string" })
csvData!: string;
@OutputField({ description: "Processed and cleaned data" })
cleanedData!: Array<Record<string, any>>;
@OutputField({ description: "Data quality issues found" })
issues!: string[];
@OutputField({ description: "Suggested improvements" })
improvements!: string[];
}
const processor = new ChainOfThought(DataProcessor);
const result = await processor.forward({
csvData: "name,age,email\nJohn,25,[email protected]\n..."
});
TS-DSPy follows a clean, modular architecture:
@ts-dspy/
├── core/                 # Core framework
│   ├── types/            # TypeScript interfaces & types
│   ├── core/             # Base classes (Signature, Module, Prediction)
│   ├── modules/          # Built-in modules (Predict, ChainOfThought, RespAct)
│   └── utils/            # Utilities (parsing, caching, validation)
│
├── openai/               # OpenAI integration
│   ├── models/           # OpenAI language model implementation
│   └── utils/            # OpenAI-specific utilities
│
├── gemini/               # Google Gemini integration
│   ├── models/           # Gemini language model implementation
│   └── utils/            # Gemini-specific utilities
│
└── [future providers]/   # Anthropic, Cohere, etc. (coming soon)
- Signature: Abstract base for defining input/output schemas
- Module: Abstract base for all LLM modules
- Prediction: Type-safe container for module outputs
- Example: Container for training/evaluation examples
- @InputField(config): Marks class properties as input fields
- @OutputField(config): Marks class properties as output fields
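The Prediction container above can be pictured as a thin, generically typed wrapper around the parsed output fields. A toy sketch of the idea (`MiniPrediction` is illustrative only, not the real class):

```typescript
// A minimal type-safe output container in the spirit of Prediction:
// field access is checked against the inferred output shape T.
class MiniPrediction<T extends Record<string, unknown>> {
  constructor(private readonly data: T, public readonly raw: string = "") {}

  get<K extends keyof T>(key: K): T[K] {
    return this.data[key];
  }

  toObject(): T {
    return { ...this.data };
  }
}

const p = new MiniPrediction({ answer: "Paris", confidence: 0.98 });
p.get("answer");     // typed as string
p.toObject();        // plain { answer, confidence } object
```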
Run the comprehensive test suite:
# Run all tests
npm test
# Run with coverage
npm run test:coverage
# Test specific package
cd packages/core && npm test
# Run examples (use ts-node for proper execution)
npm run run:example:openai
npm run run:example:gemini
# Or run directly with ts-node
npx ts-node examples/basic-usage.ts
npx ts-node examples/basic-gemini-example.ts
# Clone the repository
git clone https://github.com/your-username/ts-dspy.git
cd ts-dspy
# Install dependencies
npm install
# Build all packages
npm run build
# Run tests
npm test
# Start development mode
npm run dev
Implement the ILanguageModel interface:
import { ILanguageModel, LLMCallOptions, ChatMessage } from '@ts-dspy/core';
export class CustomLM implements ILanguageModel {
async generate(prompt: string, options?: LLMCallOptions): Promise<string> {
// Your implementation
}
async chat(messages: ChatMessage[], options?: LLMCallOptions): Promise<string> {
// Your implementation
}
getUsage() {
// Return usage statistics
}
}
Extend the Module base class:
import { Module, Signature, Prediction, ILanguageModel } from '@ts-dspy/core';
export class CustomModule extends Module {
constructor(signature: string | typeof Signature, lm?: ILanguageModel) {
super(signature, lm);
}
async forward(inputs: Record<string, any>): Promise<Prediction> {
// Your module implementation
}
}
// Global configuration
configure({
lm: new OpenAILM({ apiKey: 'your-key' }),
cache: boolean,
tracing: boolean,
maxRetries: number,
timeout: number
});
// Get the currently configured default language model
const lm = getDefaultLM();
// Basic prediction
const predict = new Predict("input -> output");
// Reasoning with chain of thought
const reasoner = new ChainOfThought("problem -> solution");
// Tool-using agent
const agent = new RespAct("question -> answer", {
tools: { /* your tools */ },
maxSteps: 5
});
// Manual prompt building
const prompt = buildPrompt(signature, inputs, examples);
// Parse LLM output
const parsed = parseOutput(rawOutput, signature);
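parseOutput has to map a raw completion back onto the signature's output fields. A deliberately naive sketch of that idea for a field-per-line response format (`naiveParse` is illustrative; the real parser is certainly more robust, handling types, validation, and retries):

```typescript
// Extract "field: value" lines from a raw LLM response, keeping only
// the fields declared by the signature.
function naiveParse(raw: string, fields: string[]): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of raw.split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    if (fields.includes(key)) out[key] = line.slice(idx + 1).trim();
  }
  return out;
}

const parsed = naiveParse(
  "sentiment: positive\nconfidence: 0.9",
  ["sentiment", "confidence"]
);
// parsed is { sentiment: "positive", confidence: "0.9" }
```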
This package is published to npm as scoped packages from a monorepo:
- Core Package: @ts-dspy/core
- OpenAI Integration: @ts-dspy/openai
- Google Gemini Integration: @ts-dspy/gemini
- License: MIT
- Author: Arnav Dadarya
- Node.js: 18+ required
- TypeScript: 5.0+ required
- ⚠️ Execution: Use ts-node instead of transpiling to JavaScript
# For most users (core + OpenAI)
npm install @ts-dspy/core @ts-dspy/openai
# With Gemini support
npm install @ts-dspy/core @ts-dspy/gemini
# All providers
npm install @ts-dspy/core @ts-dspy/openai @ts-dspy/gemini
# Just the core framework
npm install @ts-dspy/core
# Install ts-node for proper execution
npm install -g ts-node
# Specific version
npm install @ts-dspy/core@^0.1.0
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch:
git checkout -b feature/amazing-feature
- Make your changes with tests
- Run the test suite:
npm test
- Submit a pull request
- Follow TypeScript best practices
- Add tests for new features
- Update documentation as needed
- Ensure all CI checks pass
- Follow conventional commit format
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2024 Arnav Dadarya
- Inspired by Stanford's DSPy framework
- Built with ❤️ for the TypeScript community
- Thanks to all contributors and early adopters
- Examples - Comprehensive usage examples
- Issues - Bug reports and feature requests
- Discussions - Community discussions
- Changelog - Release notes
Star ⭐ the repo if TS-DSPy helped you build better LLM applications!
Made with TypeScript • Powered by AI • Built for Developers