
TS-DSPy: TypeScript-First LLM Framework


A production-ready TypeScript framework for building reliable, type-safe LLM applications with structured outputs, reasoning patterns, and intelligent tool integration.

Quick Start • Examples • API Reference • Contributing


🌟 Why TS-DSPy?

TS-DSPy brings the powerful paradigms of Stanford's DSPy framework to the TypeScript ecosystem with full type safety, modern tooling, and production-grade features. Whether you're building AI chatbots, data processing pipelines, or intelligent agents, TS-DSPy provides the abstractions you need.

✨ Key Features

  • 🔒 Type-Safe Signatures: Define input/output schemas with automatic validation and TypeScript inference
  • 🧠 ReAct Pattern: Built-in Reasoning and Acting with intelligent tool integration
  • 🛠️ Enhanced Tool Descriptions: Provide detailed tool descriptions for better AI decision-making
  • 🔌 Multiple LLM Support: Supports OpenAI and Google Gemini with an extensible architecture for other providers
  • ⚡ Automatic Parsing: Converts raw LLM outputs to structured TypeScript objects
  • 🛡️ Robust Error Handling: Comprehensive validation with automatic retries and fallbacks
  • 📊 Usage Tracking: Built-in token usage and cost monitoring
  • 🎯 Zero Config: Works out of the box with minimal setup
  • 📦 Modular Design: Pluggable architecture for LLM providers and custom modules

📦 Installation

# Core framework
npm install @ts-dspy/core

# OpenAI integration
npm install @ts-dspy/openai

# Gemini integration
npm install @ts-dspy/gemini

Requirements:

  • Node.js 18+
  • TypeScript 5.0+

🚀 Quick Start

⚠️ Important: Use ts-node to run TypeScript files directly. Transpiling to JavaScript may cause issues with decorators and type information.

# Run examples with ts-node
npx ts-node examples/basic-usage.ts
npx ts-node examples/basic-gemini-example.ts

# Or install globally  
npm install -g ts-node
ts-node your-script.ts

See ./examples/basic-usage.ts and ./examples/basic-gemini-example.ts for runnable demos of this package, in addition to the examples below.

Basic Prediction

import { configure, Predict } from '@ts-dspy/core';
import { OpenAILM } from '@ts-dspy/openai';
// or import { GeminiLM } from '@ts-dspy/gemini';

// Configure your LLM provider
configure({
    lm: new OpenAILM({ 
        apiKey: process.env.OPENAI_API_KEY,
        model: 'gpt-4'
    })
    // or 
    // lm: new GeminiLM({
    //     apiKey: process.env.GEMINI_API_KEY,
    //     model: 'gemini-2.0-flash'
    // })
});

// Simple question-answering
const qa = new Predict("question -> answer");
const result = await qa.forward({ 
    question: "What is the capital of France?" 
});

console.log(result.answer); // "Paris"

Type-Safe Signatures

import { Signature, InputField, OutputField } from '@ts-dspy/core';

class SentimentAnalysis extends Signature {
    @InputField({ description: "Text to analyze for sentiment" })
    text!: string;

    @OutputField({ description: "Sentiment classification" })
    sentiment!: 'positive' | 'negative' | 'neutral';

    @OutputField({ description: "Confidence score between 0 and 1" })
    confidence!: number;

    static description = "Analyze the sentiment of the given text";
}

const classifier = new Predict(SentimentAnalysis);
const result = await classifier.forward({ 
    text: "I love this framework!" 
});

// Full TypeScript autocomplete and type safety
console.log(result.sentiment);   // Type: 'positive' | 'negative' | 'neutral'
console.log(result.confidence);  // Type: number

🎯 Core Concepts

1. Signatures: Define Your Interface

Signatures are the foundation of TS-DSPy, defining the input/output structure for your LLM interactions.

String Signatures (Quick & Simple)

// Basic signature
const qa = new Predict("question -> answer");

// Multi-output with types
const analyzer = new Predict("text -> sentiment: string, score: float, summary");

Class-Based Signatures (Type-Safe & Structured)

class DataExtraction extends Signature {
    @InputField({ description: "Raw text to extract data from" })
    text!: string;

    @OutputField({ description: "Extracted person names" })
    names!: string[];

    @OutputField({ description: "Extracted dates in ISO format" })
    dates!: string[];

    @OutputField({ description: "Confidence level" })
    confidence!: number;
}

2. Modules: Pre-Built Reasoning Patterns

Predict: Basic LLM Prediction

const predictor = new Predict("context, question -> answer");
const result = await predictor.forward({
    context: "The sky is blue because of Rayleigh scattering.",
    question: "Why is the sky blue?"
});

ChainOfThought: Step-by-Step Reasoning

const reasoner = new ChainOfThought("problem -> solution: int");
const result = await reasoner.forward({
    problem: "If I have 3 apples and buy 5 more, then eat 2, how many do I have?"
});

console.log(result.reasoning); // "First I have 3 apples..."
console.log(result.solution);  // 6

RespAct: Tool-Using Agents

const agent = new RespAct("question -> answer", {
    tools: {
        calculate: {
            description: "Performs mathematical calculations",
            // Demonstration only: never use eval on untrusted input
            function: (expr: string) => eval(expr)
        },
        search: {
            description: "Searches for information online",
            function: async (query: string) => await searchWeb(query)
        }
    },
    maxSteps: 5
});

πŸ› οΈ Enhanced Tool Integration

TS-DSPy features an advanced tool system with intelligent descriptions that help LLMs make better decisions about when and how to use tools.

Enhanced Tool Format

const financialAgent = new RespAct("question -> answer", {
    tools: {
        fetchStockPrice: {
            description: "Retrieves current stock price for a ticker symbol (e.g., AAPL, GOOGL). Returns price in USD. Use when you need current market data.",
            function: async (symbol: string) => {
                const response = await fetch(`/api/stocks/${symbol}`);
                return response.json();
            }
        },
        
        calculatePortfolioValue: {
            description: "Calculates total portfolio value given holdings. Takes array of {symbol, shares} objects. Use for portfolio analysis.",
            function: (holdings: Array<{symbol: string, shares: number}>) => {
                return holdings.reduce((total, holding) => 
                    total + (getStockPrice(holding.symbol) * holding.shares), 0
                );
            }
        },

        convertCurrency: {
            description: "Converts amounts between currencies using live rates. Params: amount (number), from (currency code), to (currency code).",
            function: (amount: number, from: string, to: string) => {
                return convertCurrency(amount, from, to);
            }
        }
    }
});

// The LLM now has context about when and how to use each tool
const result = await financialAgent.forward({
    question: "I own 100 AAPL shares and 50 TSLA shares. What's my portfolio worth in EUR?"
});

Benefits of Tool Descriptions

  • 🎯 Better Tool Selection: LLMs make more intelligent decisions about which tools to use
  • 📝 Self-Documenting: Your code becomes more readable and maintainable
  • 🔧 Improved Parameters: Descriptions guide proper parameter formatting
  • ⚡ Reduced Errors: Clear descriptions prevent tool misuse
  • 🔄 Backward Compatible: Legacy function-only tools still work
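The last point can be seen side by side: a legacy tool is a bare function, while the enhanced format wraps the same function with a description. This is a minimal sketch that leaves out the RespAct constructor so the two shapes stand alone (Function is used instead of eval purely to keep the demo safer):

```typescript
// Legacy format: the tool map value is just the function itself.
const legacyTools = {
    calculate: (expr: string): number => Function(`"use strict"; return (${expr})`)(),
};

// Enhanced format: the same tool, wrapped with a description the LLM can read.
const enhancedTools = {
    calculate: {
        description: "Performs mathematical calculations on a JS expression string",
        function: (expr: string): number => Function(`"use strict"; return (${expr})`)(),
    },
};

// Both shapes expose the same callable behavior.
console.log(legacyTools.calculate("2 + 3"));            // 5
console.log(enhancedTools.calculate.function("2 + 3")); // 5
```

Either shape can be passed in the tools map of a RespAct agent; the enhanced one simply gives the model more context.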

📊 Advanced Features

Usage Tracking & Cost Monitoring

const lm = new OpenAILM({ apiKey: process.env.OPENAI_API_KEY });
const module = new Predict("question -> answer", lm);

await module.forward({ question: "Hello world!" });

// Get detailed usage statistics
const usage = lm.getUsage();
console.log(`Tokens: ${usage.totalTokens}`);
console.log(`Cost: $${usage.totalCost}`);
console.log(`Requests: ${usage.requestCount}`);

Configuration & Tracing

import { configure, getTraceHistory } from '@ts-dspy/core';

configure({
    lm: new OpenAILM({ apiKey: process.env.OPENAI_API_KEY }),
    cache: true,        // Enable response caching
    tracing: true,      // Record all interactions
    maxRetries: 3,      // Auto-retry failed requests
    timeout: 30000      // Request timeout in ms
});

// After running modules, analyze performance
const traces = getTraceHistory();
traces.forEach(trace => {
    console.log(`Module: ${trace.moduleId}`);
    console.log(`Duration: ${trace.duration}ms`);
    console.log(`Tokens: ${trace.usage.totalTokens}`);
    console.log(`Cost: $${trace.usage.totalCost}`);
});

Multiple LLM Providers

import { OpenAILM } from '@ts-dspy/openai';
import { GeminiLM } from '@ts-dspy/gemini';

const fastLM = new GeminiLM({ 
    apiKey: process.env.GEMINI_API_KEY,
    model: 'gemini-2.0-flash'  // Fast and cost-effective
});

const smartLM = new OpenAILM({ 
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4'  // Powerful for complex reasoning
});

// Use different LLMs for different tasks
const quickQA = new Predict("question -> answer", fastLM);
const complexReasoner = new ChainOfThought("problem -> solution", smartLM);

📚 Examples

1. Content Analysis Pipeline

class ContentAnalysis extends Signature {
    @InputField({ description: "Article text to analyze" })
    article!: string;

    @OutputField({ description: "Main topics covered" })
    topics!: string[];

    @OutputField({ description: "Article sentiment" })
    sentiment!: 'positive' | 'negative' | 'neutral';

    @OutputField({ description: "Reading difficulty (1-10)" })
    difficulty!: number;

    @OutputField({ description: "Key takeaways" })
    takeaways!: string[];
}

const analyzer = new Predict(ContentAnalysis);
const result = await analyzer.forward({
    article: "Your article text here..."
});

2. Research Assistant Agent

const researcher = new RespAct("research_query -> comprehensive_answer", {
    tools: {
        searchAcademic: {
            description: "Searches academic papers and journals. Use for scholarly research and citations.",
            function: async (query: string) => await searchScholar(query)
        },
        
        searchWeb: {
            description: "Searches general web content. Use for current events and general information.",
            function: async (query: string) => await searchGoogle(query)
        },
        
        summarizeText: {
            description: "Summarizes long text into key points. Use when you have lengthy content to process.",
            function: (text: string) => summarizeContent(text)
        }
    },
    maxSteps: 8
});

const result = await researcher.forward({
    research_query: "What are the latest developments in quantum computing algorithms?"
});

3. Data Processing Chain

// Multi-step data processing with type safety
class DataProcessor extends Signature {
    @InputField({ description: "Raw CSV data string" })
    csvData!: string;

    @OutputField({ description: "Processed and cleaned data" })
    cleanedData!: Array<Record<string, any>>;

    @OutputField({ description: "Data quality issues found" })
    issues!: string[];

    @OutputField({ description: "Suggested improvements" })
    improvements!: string[];
}

const processor = new ChainOfThought(DataProcessor);
const result = await processor.forward({
    csvData: "name,age,email\nJohn,25,john@example.com\n..."
});

πŸ—οΈ Architecture

TS-DSPy follows a clean, modular architecture:

@ts-dspy/
├── core/                  # Core framework
│   ├── types/             # TypeScript interfaces & types
│   ├── core/              # Base classes (Signature, Module, Prediction)
│   ├── modules/           # Built-in modules (Predict, ChainOfThought, RespAct)
│   └── utils/             # Utilities (parsing, caching, validation)
│
├── openai/                # OpenAI integration
│   ├── models/            # OpenAI language model implementation
│   └── utils/             # OpenAI-specific utilities
│
├── gemini/                # Google Gemini integration
│   ├── models/            # Gemini language model implementation
│   └── utils/             # Gemini-specific utilities
│
└── [future providers]/    # Anthropic, Cohere, etc. (coming soon)

Core Classes

  • Signature: Abstract base for defining input/output schemas
  • Module: Abstract base for all LLM modules
  • Prediction: Type-safe container for module outputs
  • Example: Container for training/evaluation examples

Decorators

  • @InputField(config): Mark class properties as input fields
  • @OutputField(config): Mark class properties as output fields

🧪 Testing

Run the comprehensive test suite:

# Run all tests
npm test

# Run with coverage
npm run test:coverage

# Test specific package
cd packages/core && npm test

# Run examples (use ts-node for proper execution)
npm run run:example:openai
npm run run:example:gemini

# Or run directly with ts-node
npx ts-node examples/basic-usage.ts
npx ts-node examples/basic-gemini-example.ts

🔧 Development

Setting Up Development Environment

# Clone the repository
git clone https://github.com/your-username/ts-dspy.git
cd ts-dspy

# Install dependencies
npm install

# Build all packages
npm run build

# Run tests
npm test

# Start development mode
npm run dev

Creating Custom LLM Providers

Implement the ILanguageModel interface:

import { ILanguageModel, LLMCallOptions, ChatMessage } from '@ts-dspy/core';

export class CustomLM implements ILanguageModel {
    async generate(prompt: string, options?: LLMCallOptions): Promise<string> {
        // Your implementation
    }
    
    async chat(messages: ChatMessage[], options?: LLMCallOptions): Promise<string> {
        // Your implementation  
    }
    
    getUsage() {
        // Return usage statistics
    }
}
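For unit tests, a provider can be as small as an echo model. The sketch below is self-contained: the LLMCallOptions and ChatMessage interfaces are local stand-ins for the real exports of @ts-dspy/core (their exact shapes are assumptions), and getUsage mirrors the totalTokens, totalCost, and requestCount fields shown in the usage-tracking example above.

```typescript
// Local stand-ins for the framework types, for illustration only;
// in a real provider, import these from '@ts-dspy/core' instead.
interface LLMCallOptions { temperature?: number; maxTokens?: number; }
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }

// A trivial provider that echoes prompts back -- handy for unit tests.
export class EchoLM {
    private requestCount = 0;

    async generate(prompt: string, _options?: LLMCallOptions): Promise<string> {
        this.requestCount++;
        return `echo: ${prompt}`;
    }

    async chat(messages: ChatMessage[], _options?: LLMCallOptions): Promise<string> {
        this.requestCount++;
        const last = messages[messages.length - 1];
        return `echo: ${last?.content ?? ''}`;
    }

    getUsage() {
        // Mirrors the fields used in the usage-tracking example;
        // no real tokens or cost are consumed, so both stay at zero.
        return { totalTokens: 0, totalCost: 0, requestCount: this.requestCount };
    }
}
```

Passing an instance of such a stub as the lm argument of a module lets you exercise parsing and control flow without network calls.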

Contributing Custom Modules

Extend the Module base class:

import { Module, Signature, Prediction, ILanguageModel } from '@ts-dspy/core';

export class CustomModule extends Module {
    constructor(signature: string | typeof Signature, lm?: ILanguageModel) {
        super(signature, lm);
    }

    async forward(inputs: Record<string, any>): Promise<Prediction> {
        // Your module implementation
    }
}

📋 API Reference

Configuration

// Global configuration
configure({
    lm: new OpenAILM({ apiKey: 'your-key' }),
    cache: boolean,
    tracing: boolean,
    maxRetries: number,
    timeout: number
});

// Get the currently configured default language model
const lm = getDefaultLM();

Core Modules

// Basic prediction
const predict = new Predict("input -> output");

// Reasoning with chain of thought
const reasoner = new ChainOfThought("problem -> solution");

// Tool-using agent
const agent = new RespAct("question -> answer", { 
    tools: { /* your tools */ },
    maxSteps: 5 
});

Utilities

// Manual prompt building
const prompt = buildPrompt(signature, inputs, examples);

// Parse LLM output
const parsed = parseOutput(rawOutput, signature);

📦 Publishing Information

This package is published to NPM as a scoped monorepo:

  • Core Package: @ts-dspy/core
  • OpenAI Integration: @ts-dspy/openai
  • Google Gemini Integration: @ts-dspy/gemini
  • License: MIT
  • Author: Arnav Dadarya
  • Node.js: 18+ required
  • TypeScript: 5.0+ required
  • ⚠️ Execution: Use ts-node instead of transpiling to JavaScript

Installation

# For most users (core + OpenAI)
npm install @ts-dspy/core @ts-dspy/openai

# With Gemini support
npm install @ts-dspy/core @ts-dspy/gemini

# All providers
npm install @ts-dspy/core @ts-dspy/openai @ts-dspy/gemini

# Just the core framework
npm install @ts-dspy/core

# Install ts-node for proper execution
npm install -g ts-node

# Specific version
npm install @ts-dspy/core@^0.1.0

🤝 Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes with tests
  4. Run the test suite: npm test
  5. Submit a pull request

Contribution Guidelines

  • Follow TypeScript best practices
  • Add tests for new features
  • Update documentation as needed
  • Ensure all CI checks pass
  • Follow conventional commit format

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Copyright (c) 2024 Arnav Dadarya


πŸ™ Acknowledgments

  • Inspired by Stanford's DSPy framework
  • Built with ❤️ for the TypeScript community
  • Thanks to all contributors and early adopters

🔗 Links & Resources


Star ⭐ the repo if TS-DSPy helped you build better LLM applications!

Made with TypeScript • Powered by AI • Built for Developers
