A powerful command-line interface for interacting with various Large Language Models (LLMs) including OpenAI, Anthropic Claude, Google Gemini, and local models via Ollama.
- Overview
- Features
- Installation
- Quick Start
- Architecture
- Command-Line Interface
- Models Module
- Agents Module
- Configuration
- Usage Examples
- Development
- Troubleshooting
- License
**dejavu2-cli** (also available as `dv2`) is a versatile command-line tool for executing queries to multiple language models from various providers. It provides a unified interface for AI interactions with support for conversation history, contextual inputs, customizable parameters, and agent templates.
Current Version: 0.8.30
- Multi-Provider Support: Seamlessly interact with models from OpenAI, Anthropic, Google, Meta, xAI, and local Ollama instances
- Unified Interface: Single CLI for all LLM providers with consistent parameter handling
- Conversation Management: Maintain context across sessions with persistent conversation history
- Agent Templates: Pre-configured AI personas with specialized capabilities
- Context Enhancement: Include reference files and knowledgebases in queries
- Security-First Design: Built-in input validation and secure subprocess execution
- Model Registry: Comprehensive database of 100+ models with aliases and metadata
- Smart Parameter Handling: Automatic adjustment for model-specific requirements (e.g., O1/O3 models)
- Multiple Output Formats: Export conversations to markdown, view in various formats
- Robust Error Handling: Graceful degradation with meaningful error messages
- Extensible Architecture: Modular design for easy feature additions
- Python 3.7 or higher (3.8+ recommended)
- pip package manager
- Git (for cloning the repository)
- API keys for desired LLM providers
1. **Clone the Repository**

   ```bash
   git clone https://github.com/Open-Technology-Foundation/dejavu2-cli.git
   cd dejavu2-cli
   ```

2. **Install Dependencies**

   ```bash
   pip install -r requirements.txt
   ```

3. **Set Up API Keys**

   ```bash
   # For Anthropic models (Claude)
   export ANTHROPIC_API_KEY="your_anthropic_api_key"
   # For OpenAI models (GPT, O1, O3)
   export OPENAI_API_KEY="your_openai_api_key"
   # For Google models (Gemini)
   export GOOGLE_API_KEY="your_google_api_key"
   # Ollama models work without an API key (local server)
   ```

4. **Verify Installation**

   ```bash
   ./dv2 --version
   ```
# Basic query using default model (Claude Sonnet)
dv2 "Explain quantum computing in simple terms"
# Use a specific model by alias
dv2 "Write a haiku about coding" -m gpt4o
# Use an agent template for specialized tasks
dv2 "Debug this Python code: print(1/0)" -T leet
# Continue a conversation
dv2 "What are its practical applications?" -c
# Include reference files
dv2 "Summarize this document" -r report.pdf,data.csv
The codebase follows a modular architecture with clear separation of concerns:
Core Modules:
├── main.py # CLI entry point and orchestration
├── llm_clients.py # LLM provider API integrations
├── conversations.py # Conversation history management
├── models.py # Model registry and selection
├── templates.py # Agent template management
├── context.py # Reference files and knowledgebases
├── config.py # Configuration loading and management
├── security.py # Input validation and secure execution
├── errors.py # Custom exception hierarchy
├── display.py # Output formatting and status display
├── utils.py # Utility functions
└── version.py # Version information
Handles API interactions with different providers:
- OpenAI: Standard and O1/O3 model support with parameter adjustments
- Anthropic: Claude models with native API integration
- Google: Gemini models via generativeai library
- Ollama: Local and remote server support with robust response parsing
Persistent conversation storage with:
- JSON-based storage in `~/.config/dejavu2-cli/conversations/`
- Message history with metadata tracking
- Export capabilities to markdown format
- Message manipulation (removal, pair deletion)
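To make the export path concrete, here is a minimal sketch of rendering one stored conversation to markdown. This is illustrative only: the field names (`title`, `messages`, `role`, `content`) and the storage layout are assumptions, not the actual schema used by `conversations.py`.

```python
from pathlib import Path

# Assumed storage location, per the documentation above.
CONV_DIR = Path.home() / ".config" / "dejavu2-cli" / "conversations"

def export_to_markdown(conversation: dict) -> str:
    """Render one conversation dict as markdown (hypothetical field names)."""
    lines = [f"# {conversation.get('title', 'Untitled conversation')}"]
    for msg in conversation.get("messages", []):
        lines.append(f"\n**{msg['role']}**:\n\n{msg['content']}")
    return "\n".join(lines)

conv = {
    "title": "ML Discussion",
    "messages": [
        {"role": "user", "content": "Let's discuss machine learning"},
        {"role": "assistant", "content": "Happy to! Where shall we start?"},
    ],
}
print(export_to_markdown(conv))
```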
Comprehensive security features:
- Input validation for queries and file paths
- Secure subprocess execution with whitelisting
- Command injection prevention
- Configurable security policies
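The editor-whitelisting idea can be sketched as follows. This is a hypothetical illustration of the policy described above, not the actual code in `security.py`; the whitelist mirrors the editors listed in `defaults.yaml`.

```python
import shutil
import subprocess

# Assumed whitelist, matching the documented security policy.
ALLOWED_EDITORS = {"nano", "vim", "vi", "emacs", "joe"}

def launch_editor(editor: str, path: str, timeout: float = 30.0) -> None:
    """Run a whitelisted editor on `path` without invoking a shell."""
    if editor not in ALLOWED_EDITORS:
        raise ValueError(f"Editor {editor!r} is not whitelisted")
    binary = shutil.which(editor)
    if binary is None:
        raise FileNotFoundError(f"{editor} not found on PATH")
    # Passing an argument list (never shell=True) prevents command
    # injection through a crafted file path.
    subprocess.run([binary, path], timeout=timeout, check=True)

# A non-whitelisted "editor" is rejected before anything executes:
try:
    launch_editor("rm -rf /", "/tmp/somefile")
except ValueError as e:
    print(e)
```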
dv2 [QUERY] [OPTIONS]
| Option | Description |
|---|---|
| `-m, --model MODEL` | Select model by name or alias (e.g., `gpt4o`, `sonnet`) |
| `-T, --template NAME` | Use an agent template (e.g., `"Coder - Software Expert"`) |
| `-t, --temperature FLOAT` | Set creativity level (0.0-1.0) |
| `-M, --max-tokens INT` | Maximum response length |
| `-s, --systemprompt TEXT` | Custom system instructions |
| Option | Description |
|---|---|
| `-c, --continue` | Continue the most recent conversation |
| `-C, --conversation ID` | Continue a specific conversation |
| `-n, --new-conversation` | Force start a new conversation |
| `-x, --list-conversations` | List all saved conversations |
| `-e, --export-conversation ID` | Export conversation to markdown |
| `-W, --list-messages ID` | Show all messages in a conversation |
| `--remove-message ID INDEX` | Remove a specific message |
| `--remove-pair ID INDEX` | Remove a user-assistant message pair |
| Option | Description |
|---|---|
| `-r, --reference FILES` | Include reference files (comma-separated) |
| `-k, --knowledgebase NAME` | Use a knowledgebase for context |
| `-Q, --knowledgebase-query` | Custom query for knowledgebase |
| Option | Description |
|---|---|
| `-S, --status` | Display current configuration |
| `-a, --list-models` | List available models |
| `-l, --list-template NAME` | Show template details |
| `-K, --list-knowledge-bases` | List available knowledgebases |
| `-E, --edit-templates` | Edit Agents.json |
| `-D, --edit-defaults` | Edit defaults.yaml |
The Models module maintains a comprehensive registry of AI models across all supported providers.
Each model entry contains:
- Identification: model ID, alias, provider, family
- Capabilities: context window, max tokens, vision support
- Availability: enabled/available status (0-9 scale)
- Metadata: descriptions, training dates, pricing
Example model entry:
"claude-3-7-sonnet-latest": {
"model": "claude-3-7-sonnet-latest",
"alias": "sonnet",
"parent": "Anthropic",
"model_category": "LLM",
"context_window": 200000,
"max_output_tokens": 128000,
"vision": 1,
"available": 9,
"enabled": 1
}
Advanced querying and filtering:
# List all enabled models
./Models/dv2-models-list
# Filter by provider
./Models/dv2-models-list -F "parent:equals:OpenAI"
# Complex queries
./Models/dv2-models-list -F "context_window:>:100000" -F "vision:equals:1"
# Export formats
./Models/dv2-models-list -o json
./Models/dv2-models-list -o table -col model,alias,context_window
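The `field:op:value` filter expressions above could be evaluated along these lines. This is a sketch under assumptions: the actual operator set and semantics of `dv2-models-list` may differ.

```python
# Hypothetical evaluator for filter expressions like
# "parent:equals:OpenAI" or "context_window:>:100000".
def matches(entry: dict, expr: str) -> bool:
    field, op, value = expr.split(":", 2)
    actual = entry.get(field)
    if op == "equals":
        return str(actual) == value
    if op == "contains":
        return value in str(actual)
    if op == ">":
        return float(actual) > float(value)
    if op == "<":
        return float(actual) < float(value)
    raise ValueError(f"Unknown operator: {op}")

model = {"parent": "OpenAI", "context_window": 128000, "vision": 1}
print(matches(model, "parent:equals:OpenAI"))     # True
print(matches(model, "context_window:>:100000"))  # True
print(matches(model, "vision:equals:1"))          # True
```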
Claude-powered intelligent updates:
# Update all providers
./Models/dv2-models-update --all
# Update specific provider
./Models/dv2-models-update --provider anthropic
# Dry run mode
./Models/dv2-models-update --all --dry-run
Models can be selected by:
- Full model ID: `claude-3-7-sonnet-latest`
- Alias: `sonnet`
- Partial match: `gpt4` (matches `gpt-4o`)
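One plausible way to implement that resolution order (exact ID, then alias, then partial match) is sketched below. The actual lookup logic in `models.py` may differ; the registry entries here are abbreviated for illustration.

```python
# Hypothetical model-name resolution: ID -> alias -> partial match.
def resolve_model(name: str, registry: dict) -> str:
    if name in registry:                      # 1. full model ID
        return name
    for model_id, meta in registry.items():   # 2. exact alias
        if meta.get("alias") == name:
            return model_id
    for model_id in registry:                 # 3. partial match (ignore dashes)
        if name.replace("-", "") in model_id.replace("-", ""):
            return model_id
    raise KeyError(f"No model matches {name!r}")

registry = {
    "claude-3-7-sonnet-latest": {"alias": "sonnet"},
    "gpt-4o": {"alias": "gpt4o"},
}
print(resolve_model("sonnet", registry))  # claude-3-7-sonnet-latest
print(resolve_model("gpt4", registry))    # gpt-4o
```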
The Agents module provides pre-configured AI personas with specialized capabilities.
Agents are organized by category:
- General: Multi-purpose assistants
- Specialist: Domain experts (coding, legal, medical, etc.)
- Edit-Summarize: Content processing specialists
Each agent defines:
"Leet - Full-Stack Programmer": {
"category": "Specialist",
"systemprompt": "You are Leet, an expert full-stack programmer...",
"model": "claude-3-7-sonnet-latest",
"max_tokens": 8000,
"temperature": 0.35,
"monospace": true,
"available": 9,
"enabled": 9
}
# List all agents
./Agents/dv2-agents list
# View specific agent
./Agents/dv2-agents list "Leet"
# List by category
./Agents/dv2-agents list -c Specialist
# Create new agent
./Agents/dv2-agents insert "NewAgent - Description" \
--model claude-3-7-sonnet-latest --temperature 0.7
# Edit existing agent
./Agents/dv2-agents edit "Leet"
Agents are selected via the `-T` option:
# Use the Leet programmer for code review
dv2 "Review this Python function for security issues" -T leet -r code.py
# Use the Legal specialist
dv2 "Explain this contract clause" -T "Legal - Law and Regulations"
# Use the Editor for improving text
dv2 "Improve this paragraph" -T Editor
Note: when specifying an agent with the `-T|--template` option, only the agent key is required, matched case-insensitively. To list the agent template keys: `jq -r 'keys[]' Agents.json | cut -d' ' -f1`
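That lookup rule (only the key before the first space, matched case-insensitively) can be sketched as follows. This is hypothetical code, not the actual implementation in `templates.py`.

```python
# Hypothetical case-insensitive key lookup: "leet" matches
# "Leet - Full-Stack Programmer".
def find_agent(key: str, agents: dict) -> str:
    want = key.lower()
    for full_name in agents:
        if full_name.split()[0].lower() == want:
            return full_name
    raise KeyError(f"No agent template with key {key!r}")

agents = {
    "Leet - Full-Stack Programmer": {},
    "Editor - Copy Editing": {},
}
print(find_agent("leet", agents))  # Leet - Full-Stack Programmer
```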
1. **defaults.yaml** - System defaults

   ```yaml
   defaults:
     template: Dejavu2
     model: sonnet
     temperature: 0.1
     max_tokens: 4000
   security:
     subprocess:
       timeout: 30.0
       allowed_editors: ["nano", "vim", "vi", "emacs", "joe"]
   ```

2. **User Configuration** - `~/.config/dejavu2-cli/config.yaml`
   - Overrides system defaults
   - User-specific settings

3. **Models.json** - Model registry
   - Comprehensive model database
   - Provider configurations

4. **Agents.json** - Agent templates
   - Pre-configured AI personas
   - Specialized system prompts
Settings are applied in order of precedence:
1. Command-line arguments (highest priority)
2. Agent template settings
3. User configuration
4. System defaults (lowest priority)
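The precedence order above amounts to a layered merge, low priority first. A minimal sketch (illustrative only; `config.py` may implement this differently):

```python
# Merge setting layers from lowest to highest priority; later layers win.
def effective_settings(defaults, user_cfg, template, cli_args):
    merged = {}
    for layer in (defaults, user_cfg, template, cli_args):  # low -> high
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = effective_settings(
    defaults={"model": "sonnet", "temperature": 0.1, "max_tokens": 4000},
    user_cfg={"temperature": 0.3},
    template={"model": "claude-3-7-sonnet-latest", "temperature": 0.35},
    cli_args={"temperature": 0.9},
)
print(settings["temperature"])  # 0.9  (CLI wins)
print(settings["model"])        # claude-3-7-sonnet-latest (template wins)
print(settings["max_tokens"])   # 4000 (from defaults)
```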
# Simple question
dv2 "What is the capital of France?"
# Creative writing with high temperature
dv2 "Write a short story about AI" -t 0.9
# Technical analysis with low temperature
dv2 "Explain TCP/IP networking" -m opus -t 0.1
# Analyze code
dv2 "Review this code for bugs" -r main.py,utils.py
# Summarize documents
dv2 "Summarize these reports" -r report1.pdf,report2.docx -T Summary
# Compare files
dv2 "What are the differences between these configs?" -r old.yaml,new.yaml
# Start a titled conversation
dv2 "Let's discuss machine learning" --title "ML Discussion"
# Continue conversation
dv2 "What about neural networks?" -c
# Export conversation
dv2 -e current -f ml_discussion.md
# Review conversation history
dv2 -x
dv2 -W 550e8400-e29b-41d4-a716-446655440000
# Use the Dejavu2 general assistant
dv2 "Help me understand this concept" -T dejavu2
# Use specialized agents for domain tasks
dv2 "Get business advice for Indonesia" -T askokusi
dv2 "Find bugs in this code" -T leet -r app.js
dv2 "Diagnose these symptoms" -T diffdiagnosis
# Content processing agents
dv2 "Improve this text" -T subeditor -r draft.txt
dv2 "Summarize this report" -T summariser -r report.pdf
dv2 "Convert to markdown" -T text2md -r document.txt
# Creative and specialized agents
dv2 "Create a short video idea" -T vazz
dv2 "Interview me about my life" -T bio
dv2 "Write a children's story" -T charlesdodgson
dv2 "Create a Twitter post" -T x_post
# Other useful agents
dv2 "Get factual answers" -T Virgo
dv2 "Translate this text" -T trans -r document.txt
dv2 "Ask with humor" -T Sarki
# Combine agents with specific parameters
dv2 "Debug this Python code" -T Leet -m opus -t 0.2 -r buggy_code.py
The `customkb` integration allows you to query vector databases for enhanced context:
# Basic knowledgebase query
dv2 "What is our coding standard?" -k "engineering_docs"
# Specify custom query for the knowledgebase
dv2 "Explain the deployment process" -k "devops_kb" -Q "kubernetes deployment procedures"
# Combine knowledgebase with agent templates
dv2 "Review this code against our standards" -T leet -k "coding_standards" -r new_feature.py
# Multiple context sources
dv2 "Is this compliant with our policies?" -k "company_policies" -r proposal.pdf
Available Knowledgebases:
- List all available knowledgebases: `dv2 -K`
- Knowledgebases are stored in `/var/lib/vectordbs/`
- Each KB has a configuration file defining its content and indexing
# Chain commands with context
dv2 "Analyze this data" -r data.csv | dv2 "Create a summary report" -c
# Complex multi-context query
dv2 "Review architecture" -T DevOps -k "best_practices" -r architecture.md -m opus
# Export formatted conversation
dv2 "Let's design a system" -T Architect -c | dv2 -e current -O > design_discussion.md
dejavu2-cli/
├── Core Modules
│ ├── main.py # CLI orchestration
│ ├── llm_clients.py # Provider integrations
│ ├── conversations.py # History management
│ ├── models.py # Model selection
│ ├── templates.py # Agent management
│ ├── context.py # Reference handling
│ ├── config.py # Configuration
│ ├── security.py # Security layer
│ ├── errors.py # Exception hierarchy
│ └── display.py # Output formatting
├── Configuration
│ ├── defaults.yaml # System defaults
│ ├── Models.json # Model registry
│ └── Agents.json # Agent templates
├── Submodules
│ ├── Models/ # Model management tools
│ └── Agents/ # Agent management tools
└── Tests
├── unit/ # Unit tests
├── integration/ # Integration tests
└── functional/ # End-to-end tests
- Python Style: 2-space indentation, 100 char line limit
- Naming: snake_case for functions/variables, PascalCase for classes
- Documentation: Google-style docstrings for all public APIs
- Error Handling: Use custom exceptions from `errors.py`
- File Endings: All Python scripts must end with `#fin`
# Run all tests
./run_tests.sh
# Run specific test categories
./run_tests.sh --unit
./run_tests.sh --integration
./run_tests.sh --functional
# Run with coverage
./run_tests.sh --coverage
# Run specific test
python -m pytest tests/unit/test_security.py -v
1. **New LLM Provider**
   - Add client class to `llm_clients.py`
   - Update `initialize_clients()` and `query()` functions
   - Add models to `Models.json`
   - Create provider update module in `Models/utils/dv2-update-models/providers/`

2. **New Agent Template**
   - Add entry to `Agents.json` or use the `dv2-agents insert` command
   - Test with various queries
   - Set appropriate availability/enabled levels

3. **New CLI Option**
   - Add Click option to `main.py`
   - Update relevant processing functions
   - Add tests for new functionality
   - Update documentation
API Key Problems
# Check if keys are set
dv2 --status | grep "API Keys"
# Verify specific key
echo $ANTHROPIC_API_KEY
Model Not Found
# List available models
dv2 --list-models
# Check model details
./Models/dv2-models-list -F "alias:contains:sonnet"
Conversation Issues
# List conversations
dv2 -x
# Clean up old conversations
dv2 -X [conversation-id]
# Force new conversation
dv2 "query" -n
Ollama Connection
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Use remote Ollama server
export OLLAMA_HOST=https://your-server.com
# Enable verbose logging
dv2 "test query" -v
# Log to file
dv2 "test query" --log-file debug.log
# Check configuration
dv2 --status -P
This project is licensed under the GPL-3.0 License. See the LICENSE file for details.
Contributions are welcome! Please:
- Review CLAUDE.md for development guidelines
- Follow the coding standards
- Add tests for new features
- Update documentation as needed
- Issues: GitHub Issues
- Documentation: Check the `docs/` directory for detailed guides
- Community: Join discussions in the issues section
Current Version: 0.8.30 | Status: Active Development | Python: 3.7+
#fin