This directory contains comprehensive examples demonstrating various features and use cases of the Browser VectorDB library.
### embedding-usage.ts

**Difficulty:** Beginner
**Topics:** Embeddings, Transformers.js

Explore text embedding generation:

- Loading embedding models
- Generating embeddings for text
- Batch embedding operations
- Model caching strategies

```bash
node dist/examples/embedding-usage.js
```
### rag-usage.ts

**Difficulty:** Intermediate
**Topics:** RAG, LLM Integration, Context Management

Build retrieval-augmented generation workflows:

- Setting up RAG pipelines
- Document insertion and retrieval
- Context formatting
- LLM integration (wllama/WebLLM)
- Streaming responses

```bash
node dist/examples/rag-usage.js
```
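The context-formatting step listed above can be sketched in a few lines of plain JavaScript. This is an illustrative sketch, not the library's API: `buildPrompt`, the hit shape (`text`, `metadata.source`), and the character budget are all assumptions.

```js
// Build an LLM prompt from retrieved documents (illustrative sketch,
// not the library's API). Each hit is assumed to have `text` and
// `metadata.source`; hits are taken in retrieval order until the
// character budget is exhausted.
function buildPrompt(query, hits, maxChars = 2000) {
  const context = [];
  let used = 0;
  for (const hit of hits) {
    const entry = `[${hit.metadata.source}] ${hit.text}`;
    if (used + entry.length > maxChars) break; // stay inside the context budget
    context.push(entry);
    used += entry.length;
  }
  return `Answer using only the context below.\n\n` +
         `Context:\n${context.join('\n')}\n\nQuestion: ${query}\nAnswer:`;
}

const prompt = buildPrompt('What is HNSW?', [
  { text: 'HNSW is a graph-based ANN index.', metadata: { source: 'notes.md' } },
]);
console.log(prompt);
```

A real pipeline would also handle deduplication and token-level (rather than character-level) budgeting, which `rag-usage.ts` demonstrates against the actual API.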
### llm-usage.ts

**Difficulty:** Intermediate
**Topics:** Local LLMs, Text Generation

Work with local language models:

- wllama (WASM-based inference)
- WebLLM (WebGPU-accelerated)
- Text generation
- Streaming completions

```bash
node dist/examples/llm-usage.js
```
### webllm-usage.ts

**Difficulty:** Intermediate
**Topics:** WebGPU, Accelerated Inference

Leverage WebGPU for fast inference:

- WebGPU setup and detection
- Model loading and initialization
- Chat completions
- Performance optimization

```bash
node dist/examples/webllm-usage.js
```
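WebGPU detection usually boils down to checking for `navigator.gpu` before choosing a backend. A minimal sketch of such a device picker (`pickDevice` is a hypothetical helper, not part of the library):

```js
// Prefer WebGPU when the runtime exposes it, fall back to WASM otherwise.
// `nav` defaults to the global navigator when one exists (browsers, and
// recent Node versions, define one).
function pickDevice(nav = globalThis.navigator) {
  return nav && 'gpu' in nav ? 'webgpu' : 'wasm';
}

console.log(pickDevice({ gpu: {} })); // 'webgpu'
console.log(pickDevice({}));          // 'wasm'
```

The same value could then be passed as the `device` option in the embedding configuration shown later in this README.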
### mcp-usage.ts

**Difficulty:** Intermediate
**Topics:** MCP Protocol, Tool Integration

Integrate with AI assistants using MCP:

- MCP tool definitions
- Tool execution
- Parameter validation
- Error handling

```bash
node dist/examples/mcp-usage.js
```
### mcp-server-standalone.ts

**Difficulty:** Advanced
**Topics:** Server Setup, Production Deployment

Build a production-ready MCP server:

- Server initialization and configuration
- Tool management
- Performance monitoring
- Integration patterns
- Best practices

```bash
node dist/examples/mcp-server-standalone.js
```
### performance-usage.ts

**Difficulty:** Advanced
**Topics:** Caching, Memory Management, Optimization

Optimize performance for production:

- LRU caching strategies
- Memory management
- Batch operations
- Progressive loading
- Performance metrics

```bash
node dist/examples/performance-usage.js
```
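The LRU caching strategy mentioned above can be sketched with a plain `Map`, which preserves insertion order (a re-inserted key moves to the end, so the first key is always the least recently used). This is illustrative only; the library's cache internals may differ.

```js
// Minimal LRU cache sketch built on Map's insertion order.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // refresh recency by re-inserting at the end
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first key in iteration order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new LRUCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');              // touch 'a' so 'b' becomes LRU
cache.set('c', 3);           // evicts 'b'
console.log(cache.get('b')); // undefined
console.log(cache.get('a')); // 1
```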
### multimodal-search.ts

**Difficulty:** Advanced
**Topics:** CLIP, Image Embeddings, Cross-Modal Search

Implement text and image search:

- CLIP model integration
- Text-to-image search
- Image-to-text search
- Cross-modal retrieval
- Multimodal filtering

```bash
node dist/examples/multimodal-search.js
```
### document-qa.ts

**Difficulty:** Advanced
**Topics:** Document Processing, Q&A Systems, RAG

Build intelligent document Q&A systems:

- Document chunking strategies
- Metadata management
- Citation tracking
- Confidence scoring
- Multi-document search

```bash
node dist/examples/document-qa.js
```
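One common chunking strategy is fixed-size windows with overlap, so that context straddling a boundary stays retrievable from both neighbouring chunks. A sketch, assuming nothing about the library's internals (the sizes and the chunk shape are illustrative):

```js
// Fixed-size chunking with overlap. Each chunk records its character
// offsets in `metadata` so citations can point back into the source.
function chunkText(text, chunkSize = 200, overlap = 50) {
  const chunks = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push({
      text: text.slice(start, start + chunkSize),
      metadata: { start, end: Math.min(start + chunkSize, text.length) },
    });
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

const chunks = chunkText('x'.repeat(450), 200, 50);
console.log(chunks.length);      // 3
console.log(chunks[1].metadata); // { start: 150, end: 350 }
```

Sentence- and paragraph-aware splitting usually retrieves better than raw character windows; `document-qa.ts` compares strategies against the real API.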
### benchmark-usage.ts

**Difficulty:** Advanced
**Topics:** Benchmarking, Performance Testing, Metrics

Run comprehensive performance benchmarks:

- Search latency across dataset sizes
- Insertion throughput measurement
- Memory usage profiling
- Cache performance analysis
- Model load time testing
- Cross-browser comparison

```bash
node dist/examples/benchmark-usage.js
```
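Latency benchmarking like the above generally reduces to timing an operation many times and reporting percentiles rather than a single average. A minimal sketch using `performance.now()` (available in browsers and modern Node); the `benchmark` helper is illustrative, not the example's actual harness:

```js
// Time `fn` repeatedly and report p50/p95/max latency in milliseconds.
function benchmark(fn, iterations = 100) {
  const times = [];
  for (let i = 0; i < iterations; i++) {
    const t0 = performance.now();
    fn();
    times.push(performance.now() - t0);
  }
  times.sort((a, b) => a - b);
  const pct = (p) =>
    times[Math.min(times.length - 1, Math.floor((p / 100) * times.length))];
  return { p50: pct(50), p95: pct(95), max: times[times.length - 1] };
}

const stats = benchmark(() => JSON.stringify({ a: [1, 2, 3] }), 200);
console.log(stats); // machine-dependent, e.g. { p50: 0.0x, p95: 0.0x, max: ... }
```

Percentiles matter because browser workloads see GC pauses and JIT warm-up; p95 tells you what a slow search actually feels like.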
### semantic-search-demo.html

**Type:** Web Application
**Features:** Beautiful UI, Real-time Search, Filtering

A complete semantic search application with:

- Modern, responsive UI
- Real-time search with filters
- Document management
- Statistics dashboard
- Export/import functionality

To run:

```bash
# Serve the HTML file with a local server
npx serve examples

# Open http://localhost:3000/semantic-search-demo.html
```
### rag-chatbot-demo.html

**Type:** Web Application
**Features:** Chat Interface, Streaming, Source Citations

An interactive chatbot powered by RAG:

- Chat-style interface
- Streaming responses
- Source attribution
- Conversation history
- Customizable settings

To run:

```bash
npx serve examples

# Open http://localhost:3000/rag-chatbot-demo.html
```
### benchmark-demo-simple.html

**Type:** Web Application
**Features:** Standalone Performance Testing

Lightweight performance benchmarking without dependencies:

- Tests basic JavaScript operations
- Array and object performance
- IndexedDB operations
- Environment detection
- No build required

To run:

```bash
npx serve examples

# Open http://localhost:3000/benchmark-demo-simple.html
```
### export-import-usage.ts

**Type:** Node.js Script
**Features:** Data Portability, Backup/Restore

Export and import database data:

- Export to JSON
- Import from JSON
- Backup and restore workflows

To run:

```bash
node dist/examples/export-import-usage.js
```
## Setup

```bash
# Install dependencies
npm install

# Build the library
npm run build
```

## Running the Examples

All TypeScript examples need to be compiled first:

```bash
# Build the library
npm run build

# Run any example
node dist/examples/<example-name>.js
```

For HTML demos, you can use any static file server:
```bash
# Option 1: Using npx serve
npx serve examples

# Option 2: Using Python
python -m http.server 8000

# Option 3: Using Node.js http-server
npx http-server examples
```

## Learning Path

### Beginner

- Read `docs/QUICKSTART.md` to understand initialization
- Explore `embedding-usage.ts` to learn about embeddings
- Try `semantic-search-demo.html` for a visual understanding

### Intermediate

- Learn RAG with `rag-usage.ts`
- Explore LLM integration with `llm-usage.ts`
- Try `rag-chatbot-demo.html` for interactive RAG
- Study MCP integration with `mcp-usage.ts`

### Advanced

- Master performance with `performance-usage.ts`
- Build multimodal apps with `multimodal-search.ts`
- Create Q&A systems with `document-qa.ts`
- Deploy with `mcp-server-standalone.ts`
## Common Use Cases

- Semantic search: see `semantic-search-demo.html` (search, filtering, metadata, persistence)
- RAG chatbot: see `rag-chatbot-demo.html`, `rag-usage.ts` (context retrieval, LLM generation, citations)
- Document Q&A: see `document-qa.ts` (document chunking, Q&A, citations, confidence)
- Multimodal search: see `multimodal-search.ts` (text-to-image, image-to-image, CLIP embeddings)
- MCP integration: see `mcp-server-standalone.ts`, `mcp-usage.ts` (MCP protocol, tool execution, AI assistant integration)

## Configuration Examples

### Basic Setup

```js
const db = new VectorDB({
  storage: { dbName: 'my-app' },
  index: { dimensions: 384, metric: 'cosine' },
  embedding: { model: 'Xenova/all-MiniLM-L6-v2', device: 'wasm' },
});
```
### With LLM Integration

```js
const db = new VectorDB({
  storage: { dbName: 'my-app' },
  index: { dimensions: 384, metric: 'cosine' },
  embedding: { model: 'Xenova/all-MiniLM-L6-v2', device: 'wasm' },
  llm: {
    provider: 'wllama',
    model: 'https://huggingface.co/.../model.gguf',
  },
});
```
### Production Configuration

```js
const db = new VectorDB({
  storage: { dbName: 'my-app', maxVectors: 100000 },
  index: {
    dimensions: 384,
    metric: 'cosine',
    indexType: 'hnsw', // Faster for large datasets
  },
  embedding: {
    model: 'Xenova/all-MiniLM-L6-v2',
    device: 'webgpu', // Use GPU if available
    cache: true,
  },
});
```

## Performance Tips
- **Use Batch Operations**: Insert multiple documents at once

  ```js
  await db.insertBatch(documents); // Faster than individual inserts
  ```

- **Enable Caching**: Cache embeddings and models

  ```js
  embedding: { cache: true }
  ```

- **Optimize Index**: Choose the right index type

  ```js
  index: { indexType: 'hnsw' } // Better for large datasets
  ```

- **Use WebGPU**: Enable GPU acceleration when available

  ```js
  embedding: { device: 'webgpu' }
  ```

- **Filter Early**: Use metadata filters to reduce search space

  ```js
  await db.search({ text: query, k: 10, filter: { field: 'category', operator: 'eq', value: 'tech' } });
  ```
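Why filtering early helps: vectors that fail the metadata predicate never need to be scored at all. A brute-force sketch of that idea in plain JavaScript (the record shape, `cosine`, and `search` are illustrative assumptions, not the library's internals):

```js
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Apply the cheap metadata predicate first, then score only survivors.
function search(records, queryVec, k, predicate = () => true) {
  return records
    .filter((r) => predicate(r.metadata))
    .map((r) => ({ ...r, score: cosine(r.vector, queryVec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const records = [
  { id: 1, vector: [1, 0],     metadata: { category: 'tech' } },
  { id: 2, vector: [0, 1],     metadata: { category: 'food' } },
  { id: 3, vector: [0.9, 0.1], metadata: { category: 'tech' } },
];
const ids = search(records, [1, 0], 2, (m) => m.category === 'tech').map((r) => r.id);
console.log(ids); // [ 1, 3 ]
```

With an ANN index the mechanics differ, but the principle is the same: the smaller the candidate set, the less similarity work per query.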
## Troubleshooting

**Slow model loading**

```js
// Check if model is cached
embedding: { cache: true }

// Use smaller models for testing
embedding: { model: 'Xenova/all-MiniLM-L6-v2' } // 23MB
```

**High memory usage**

```js
// Limit vector count
storage: { maxVectors: 10000 }

// Use progressive loading
// See: performance-usage.ts
```

**WebGPU not available**

```js
// Fallback to WASM
embedding: { device: 'wasm' }
```

**Storage quota exceeded**

```js
// Export and clear old data
const data = await db.export();
await db.clear();
```

## Contributing

Want to add an example? Follow these guidelines:
- Clear Purpose: Each example should demonstrate specific features
- Well Commented: Explain what each section does
- Error Handling: Show proper error handling patterns
- Best Practices: Demonstrate recommended approaches
- Self-Contained: Examples should run independently
## Learning Tips

- Start Simple: Begin with basic examples and gradually move to advanced ones
- Experiment: Modify examples to understand how things work
- Read Comments: Examples are heavily commented for learning
- Check Console: Many examples log detailed information
- Use Demos: Interactive HTML demos help visualize concepts
## Complete Applications

Looking for complete project examples? Check out:

- Semantic Search App: `semantic-search-demo.html`
- RAG Chatbot: `rag-chatbot-demo.html`
- MCP Server: `mcp-server-standalone.ts`

## Support

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Full Docs

Happy coding! 🚀