React Native Components RAG System

A complete Retrieval-Augmented Generation (RAG) system built with TypeScript for React Native component documentation. This system provides intelligent search and question-answering capabilities over React Native component documentation using local models.

🚀 Features

  • Interactive Chat Interface: Modern web UI with real-time streaming responses
  • MCP Server: Model Context Protocol server for AI assistant integration
  • Server-Sent Events (SSE): Live streaming of AI responses for better UX
  • Local RAG Pipeline: Complete RAG implementation using LangChain.js
  • Vector Search: LanceDB for efficient similarity search
  • Local Embeddings: @xenova/transformers for text embeddings
  • Local LLM: Llama3 via Ollama for answer generation
  • REST API: Express.js with comprehensive endpoints
  • Context Display: Visual representation of retrieved documents
  • Message History: Persistent chat history with localStorage
  • Responsive Design: Works seamlessly on desktop and mobile
  • Swagger Documentation: Interactive API documentation
  • TypeScript: Full type safety and modern development experience
  • Modular Architecture: Clean, extensible codebase

📋 Prerequisites

  • Node.js >= 20.0.0
  • Ollama installed and running locally
  • Llama3 model pulled in Ollama

Installing Ollama and Llama3

# Install Ollama (macOS)
brew install ollama

# Start Ollama service
ollama serve

# Pull Llama3 model
ollama pull llama3

🛠️ Installation

  1. Clone and install dependencies:
cd rn-base-component-rag
npm install
  2. Configure environment:
cp .env.example .env
# Edit .env file with your preferences
  3. Start the server:
npm start

The server will automatically:

  • Initialize the embedding model
  • Load and index all documentation from ./docs
  • Start the API server on port 3000 (configurable)
  • Serve the interactive chat interface at http://localhost:3000

🔧 Configuration

Configure the system via environment variables in .env:

# Server Configuration
PORT=3000
NODE_ENV=development

# Model Configuration
MODEL=llama3
OLLAMA_BASE_URL=http://localhost:11434

# Embedding Model Configuration
EMBEDDING_MODEL=Xenova/bge-base-en-v1.5

# Vector Database Configuration
LANCEDB_PATH=./data/lancedb
VECTOR_DIMENSION=768  # must match the embedding model's output size (bge-base-en-v1.5 is 768-dimensional)

# RAG Configuration
TOP_K_RESULTS=5
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
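
The chunking values map directly onto a LangChain text splitter. Below is a minimal sketch of how loader.ts might consume CHUNK_SIZE and CHUNK_OVERLAP; the import path varies by LangChain version, and ./docs/Button.md is an assumed file name:

// Sketch only: how CHUNK_SIZE/CHUNK_OVERLAP typically feed a LangChain splitter.
// The real loader.ts may differ; ./docs/Button.md is an assumed file name.
import { promises as fs } from 'node:fs';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: Number(process.env.CHUNK_SIZE ?? 1000),
  chunkOverlap: Number(process.env.CHUNK_OVERLAP ?? 200),
});

const markdown = await fs.readFile('./docs/Button.md', 'utf8');
const chunks = await splitter.createDocuments([markdown]); // overlapping chunks ready for embedding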

💬 Chat Interface

Access the interactive chat interface at http://localhost:3000

Features:

  • Real-time Streaming: Responses stream in real-time using Server-Sent Events
  • Context Sidebar: View retrieved documents that inform each response
  • Message History: Persistent chat history saved locally
  • Component Tags: Quick access to all available React Native components
  • Example Questions: Pre-built queries to get started quickly
  • Responsive Design: Works on desktop, tablet, and mobile devices
  • Configurable Settings: Adjust streaming mode and context document count

Usage:

  1. Open http://localhost:3000 in your browser
  2. Type your question about React Native components
  3. Watch the AI respond in real-time with streaming text
  4. View the context documents used to generate the response
  5. Continue the conversation with follow-up questions

🤖 MCP Server (AI Assistant Integration)

The system includes a Model Context Protocol (MCP) server that allows AI assistants like Claude or Cursor to directly access your React Native documentation.

Starting the MCP Server

Development mode (with TypeScript compilation):

npm run mcp

Production mode (using compiled JavaScript):

npm run build  # First build the project
npm run mcp:prod

Configuration for AI Assistants

Add this to your MCP client configuration (e.g., Cursor settings):

For Development:

{
  "mcpServers": {
    "rn-base-component": {
      "command": "npm",
      "args": ["run", "mcp"],
      "cwd": "/path/to/rn-base-component-rag"
    }
  }
}

For Production (recommended):

{
  "mcpServers": {
    "rn-base-component": {
      "command": "npm",
      "args": ["run", "mcp:prod"],
      "cwd": "/path/to/rn-base-component-rag"
    }
  }
}

Available MCP Tools

  • retrieve_context: Search documentation with natural language queries

    await callTool('retrieve_context', {
      question: 'How to customize Button styling and handle press events?',
      limit: 5
    });
  • search_by_metadata: Filter documentation by component name or metadata

    await callTool('search_by_metadata', {
      filters: { component: 'Button' },
      limit: 10
    });
  • get_stats: Get system statistics and configuration

    await callTool('get_stats', {});

Available MCP Resources

  • rn-component://<ComponentName>: Access complete documentation for specific components
  • rn-components://overview: Get an overview of all available components
// Access complete Button documentation
const buttonDocs = await readResource('rn-component://Button');

// Get system overview
const overview = await readResource('rn-components://overview');

Benefits of MCP Integration

  • Direct AI Access: AI assistants can query your documentation without manual copy-pasting
  • Context-Aware Responses: AI gets relevant, up-to-date information about your components
  • Standardized Interface: Uses MCP protocol for consistent integration across different AI tools
  • Real-time Updates: Always accesses the latest indexed documentation

📚 API Endpoints

Chat Endpoints

POST /api/chat/stream - Streaming chat with Server-Sent Events

curl -X POST http://localhost:3000/api/chat/stream \
  -H "Content-Type: application/json" \
  -d '{"query": "How to implement form validation?"}'

POST /api/chat/message - Regular chat response

curl -X POST http://localhost:3000/api/chat/message \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Button component?"}'

Retrieval Endpoints

POST /api/retrieve

curl -X POST http://localhost:3000/api/retrieve \
  -H "Content-Type: application/json" \
  -d '{"query": "How to use Button component?"}'

GET /api/retrieve/component/{componentName}

curl http://localhost:3000/api/retrieve/component/Button

Generation Endpoints

POST /api/generate

curl -X POST http://localhost:3000/api/generate \
  -H "Content-Type: application/json" \
  -d '{"query": "How do I customize Button styling?"}'

POST /api/generate/component/{componentName}

curl -X POST http://localhost:3000/api/generate/component/Button \
  -H "Content-Type: application/json" \
  -d '{"query": "How to customize styling?"}'

System Endpoints

GET /api/status - System health and statistics

POST /api/status/reindex - Force reindex all documents

GET /api/status/components - List available components
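
These can also be called from scripts; a quick sketch (the JSON response shapes are assumptions):

// Check health, then force a reindex (response shapes are assumptions).
const status = await fetch('http://localhost:3000/api/status').then(r => r.json());
console.log(status); // health and statistics

// Trigger a full reindex after updating files in ./docs
await fetch('http://localhost:3000/api/status/reindex', { method: 'POST' });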

📖 API Documentation

Interactive Swagger documentation is available at:

http://localhost:3000/api-docs

๐Ÿ—๏ธ Architecture

src/
├── index.ts              # Main Express server
├── api/                  # API route handlers
│   ├── retrieve.ts       # Document retrieval endpoints
│   ├── generate.ts       # Answer generation endpoints
│   └── status.ts         # System status endpoints
├── rag/                  # RAG pipeline components
│   ├── pipeline.ts       # Main RAG orchestrator
│   ├── embedder.ts       # Text embedding using @xenova/transformers
│   ├── loader.ts         # Document loading and chunking
│   ├── vectorStore.ts    # LanceDB vector database
│   ├── retriever.ts      # Similarity search and ranking
│   └── generator.ts      # LLM answer generation via Ollama
└── config/               # Configuration and utilities
    ├── modelConfig.ts    # Model and system configuration
    ├── logger.ts         # Winston logging setup
    └── swagger.ts        # OpenAPI documentation
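
Conceptually, pipeline.ts wires these modules into a single retrieve-then-generate flow. A simplified sketch with illustrative names (not the repo's exact interfaces):

// Illustrative only; the real pipeline.ts interfaces may differ.
interface Retrieved { text: string; score: number; }

interface RagDeps {
  embed(text: string): Promise<number[]>;                    // embedder.ts
  search(vector: number[], k: number): Promise<Retrieved[]>; // vectorStore.ts + retriever.ts
  generate(prompt: string): Promise<string>;                 // generator.ts (Ollama)
}

async function answer(deps: RagDeps, query: string, topK = 5): Promise<string> {
  const queryVector = await deps.embed(query);
  const docs = await deps.search(queryVector, topK);
  const context = docs.map(d => d.text).join('\n---\n');
  return deps.generate(`Answer using only this context:\n\n${context}\n\nQuestion: ${query}`);
}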

🔄 Development

Development mode with auto-reload:

npm run dev

Build TypeScript:

npm run build

Run tests:

npm test

📊 Performance

The system includes comprehensive logging for performance monitoring:

  • Embedding Generation: Time to generate embeddings
  • Vector Search: Similarity search performance
  • LLM Generation: Answer generation timing
  • End-to-End: Complete RAG pipeline timing
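
A small wrapper is enough to capture these per-stage timings. A minimal sketch, assuming the Winston logger configured in src/config/logger.ts (the timed helper itself is illustrative, not the repo's actual code):

// Illustrative stage-timing helper; pairs with the Winston setup in src/config/logger.ts.
import winston from 'winston';

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

// Wrap any async pipeline stage and log how long it took.
async function timed<T>(stage: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    logger.info(`${stage} took ${Date.now() - start}ms`);
  }
}

// e.g. const vector = await timed('embedding', () => embedder.embed(query));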

🔧 Customization

Adding New Embedding Models

Update the EMBEDDING_MODEL environment variable, and keep VECTOR_DIMENSION in sync with the model's output size:

EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2   # 384-dimensional
# or
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2  # 768-dimensional

Using Different LLMs

Change the Ollama model:

MODEL=llama3:70b
# or
MODEL=mistral
# or
MODEL=codellama
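
Whichever model you choose, requests go through Ollama's local HTTP API. A minimal sketch using LangChain's ChatOllama wrapper (the repo's generator.ts may use a different client; the @langchain/ollama package name is an assumption):

// Sketch of calling the configured model through LangChain's Ollama integration.
import { ChatOllama } from '@langchain/ollama';

const llm = new ChatOllama({
  baseUrl: process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434',
  model: process.env.MODEL ?? 'llama3',
});

const reply = await llm.invoke('Summarize the Button component props.');
console.log(reply.content);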

Adjusting Chunk Settings

Optimize for your document structure:

CHUNK_SIZE=1500          # Larger chunks for more context
CHUNK_OVERLAP=300        # More overlap for better continuity
TOP_K_RESULTS=10         # More context documents

🚨 Troubleshooting

Common Issues

  1. Ollama Connection Error

    • Ensure Ollama is running: ollama serve
    • Check the base URL in .env
  2. Model Not Found

    • Pull the model: ollama pull llama3
    • Verify available models: ollama list
  3. Memory Issues

    • Reduce CHUNK_SIZE and TOP_K_RESULTS
    • Use smaller embedding models
  4. Slow Performance

    • Use quantized models
    • Reduce vector dimensions
    • Optimize chunk sizes

Logs

Check logs for detailed error information:

tail -f logs/combined.log
tail -f logs/error.log

🔮 Future Enhancements

  • Multiple Vector Stores: Support for different databases
  • Hybrid Search: Combine semantic and keyword search
  • Caching: Redis for response caching
  • Authentication: API key management
  • Rate Limiting: Request throttling
  • Monitoring: Metrics and alerting

📄 License

MIT License - see LICENSE file for details.

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

Built with ❤️ for the React Native community
