Codegass/N0Mail

N0Mail (M2 Alpha)

AI Email Assistant

Goals (M2 Alpha)

  • Automatic Briefing: Automatically generate a Markdown briefing (brief.md) daily at 08:00 and save it to the local directory ~/.n0mail/briefs/.
  • Manual Trigger: Generate the briefing on demand using n0mail brief run [--date], with a 15-minute cache to ensure idempotency.
  • Interactive Chat: An n0mail chat CLI REPL: RAG search over local emails → streamed GPT-4o answers.
  • Offline Storage: Store metadata, body, labels, summaries, and embeddings of the last 45 days of emails locally using SQLite + ChromaDB.
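The offline store pairs a SQLite table with a ChromaDB collection. A minimal sketch of the SQLite side, including the 45-day window as a query (the table name and columns here are illustrative, not N0Mail's actual schema):

```python
import sqlite3
from datetime import datetime, timedelta

# Illustrative schema: one row per email, mirroring the fields the
# Goals list mentions (metadata, body, label, summary). Embeddings
# would live in ChromaDB, keyed by the same email id.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emails (
        id TEXT PRIMARY KEY,
        subject TEXT,
        sender TEXT,
        received_at TEXT,   -- ISO-8601 timestamp
        body TEXT,
        label TEXT,         -- filled in later by `process classify`
        summary TEXT        -- filled in later by `process summarize`
    )
""")
conn.execute(
    "INSERT INTO emails (id, subject, sender, received_at, body) "
    "VALUES (?, ?, ?, ?, ?)",
    ("msg-1", "Standup notes", "team@example.com",
     datetime.now().isoformat(), "Notes from today's standup..."),
)

# The 45-day retention window, expressed as a query:
cutoff = (datetime.now() - timedelta(days=45)).isoformat()
recent = conn.execute(
    "SELECT id FROM emails WHERE received_at >= ?", (cutoff,)
).fetchall()
print(len(recent))  # 1
```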

Features (M2 Alpha)

  • F-1: Gmail OAuth: n0mail auth google completes the PKCE flow → saves the token in keyring.
  • F-2: Email Sync: n0mail sync run uses historyId (default) or a date range (--days) for incremental fetching, or --full for fetching the latest emails. Writes to the DB and generates embeddings stored in ChromaDB.
  • F-3: Zero-Shot Classification: n0mail process classify uses GPT-4o function calling → writes the label field back (processes all unclassified emails by default).
  • F-4: Email Summarization: n0mail process summarize uses GPT-4o for summarization → writes the summary field back (optionally skips Bulk/Promo).
  • F-5: Briefing Composition: n0mail brief compose generates a briefing based on local data rules / n0mail brief generate uses OpenAI to generate the briefing.
  • F-6: Automatic Generation: Cron (n0mail cron enable) → calls brief run --today.
  • F-7: CLI Interaction: n0mail chat: RAG retrieval → GPT-4o Stream.
  • F-8: Command Completion: /open id, /copy, /retry.
  • F-9: Caching Strategy: brief_cache table.
  • F-10: Database Inspection: n0mail db stats shows statistics for SQLite and VectorDB (ChromaDB).
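The 15-minute cache behind F-9 amounts to a timestamp check against the brief_cache table before regenerating. The layout and helper below are assumptions for illustration, not N0Mail's actual implementation:

```python
import sqlite3
from datetime import datetime, timedelta

CACHE_TTL = timedelta(minutes=15)  # from the Goals section

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE brief_cache ("
    "brief_date TEXT PRIMARY KEY, generated_at TEXT, content TEXT)"
)

def get_or_generate(brief_date, generate):
    """Return the cached briefing if it is still fresh, else regenerate."""
    row = conn.execute(
        "SELECT generated_at, content FROM brief_cache WHERE brief_date = ?",
        (brief_date,),
    ).fetchone()
    now = datetime.now()
    if row and now - datetime.fromisoformat(row[0]) < CACHE_TTL:
        return row[1]  # fresh cache hit: repeated `brief run` is idempotent
    content = generate()
    conn.execute(
        "INSERT OR REPLACE INTO brief_cache VALUES (?, ?, ?)",
        (brief_date, now.isoformat(), content),
    )
    return content

calls = []
def make_brief():
    calls.append(1)          # count expensive generations
    return "# Briefing\n..."

first = get_or_generate("2024-05-01", make_brief)
second = get_or_generate("2024-05-01", make_brief)  # within TTL: cached
print(len(calls))  # 1 — the generator ran only once
```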

Usage

  1. Install Dependencies:

    pip install poetry
    poetry install # Includes markdownify, beautifulsoup4
  2. Configuration:

    • Download the OAuth client ID (credentials.json file for Desktop application type) from the Google Cloud Console and rename or save it as client_secret_....json in the project root directory.
    • Create a .env file in the project root directory and add your API key (depending on the provider you choose):
      # --- OpenAI (Default) ---
      # OPENAI_API_KEY_N0MAIL="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      
      # --- Ollama ---
      # LLM_PROVIDER=ollama 
      # EMBEDDING_PROVIDER=ollama
      # OLLAMA_HOST="http://localhost:11434" # Ollama service address
      # # Specify Ollama models (example)
      # CLASSIFY_DEFAULT_MODEL=llama3:8b
      # SUMMARIZE_DEFAULT_MODEL=llama3:8b
      # EMBEDDING_DEFAULT_MODEL=nomic-embed-text # Ensure it's pulled
      # BRIEF_DEFAULT_MODEL=llama3:instruct
      # CHAT_DEFAULT_MODEL=llama3:instruct
      
      # --- Can also mix providers ---
      # LLM_PROVIDER=openai
      # EMBEDDING_PROVIDER=ollama 
      # OPENAI_API_KEY_N0MAIL="sk-xxxxxxxx"
      # EMBEDDING_DEFAULT_MODEL=nomic-embed-text
      # OLLAMA_HOST="http://localhost:11434"
    • (Important) Ensure client_secret_....json and .env* are added to your .gitignore file.
    • (Environment Variable Explanation):
      • LLM_PROVIDER: Sets the provider for chat, classification, summarization, and briefing generation. Supported: openai (default), ollama.
      • EMBEDDING_PROVIDER: Sets the provider for generating embeddings. Supported: openai (default), ollama. Defaults to LLM_PROVIDER if not set.
      • OPENAI_API_KEY_N0MAIL: OpenAI API key (if using openai provider).
      • OLLAMA_HOST: Ollama service address (if using ollama provider), default http://localhost:11434.
      • CLASSIFY_DEFAULT_MODEL: Default model for classification (default: gpt-4o-mini).
      • SUMMARIZE_DEFAULT_MODEL: Default model for summarization (default: gpt-4o-mini).
      • EMBEDDING_DEFAULT_MODEL: Default model for embedding (OpenAI default: text-embedding-3-small, Ollama requires specification).
      • BRIEF_DEFAULT_MODEL: Default model for briefing generation (default: gpt-4o).
      • CHAT_DEFAULT_MODEL: Default model for chat (default: gpt-4o).
      • CHAT_MODEL_THINK_MODE: Set to true to enable live thinking output (<think>...</think> tags) from the chat model during ReAct steps (default: false).
      • DETAILED_ACTION_HISTORY: Set to true to use more detailed (and token-heavy) message history in the ReAct action phase for chat (default: false).
      • (Command-line options like --model or --embed-model override these defaults).
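The resolution rules above (EMBEDDING_PROVIDER falling back to LLM_PROVIDER; CLI flags overriding environment defaults) boil down to simple precedence logic. A sketch of that idea, not N0Mail's actual config code:

```python
import os
from typing import Optional

def resolve_config(cli_model: Optional[str] = None) -> dict:
    """Resolve provider/model settings with the documented precedence:
    CLI flag > environment variable > built-in default."""
    llm_provider = os.environ.get("LLM_PROVIDER", "openai")
    # EMBEDDING_PROVIDER defaults to LLM_PROVIDER when unset
    embed_provider = os.environ.get("EMBEDDING_PROVIDER", llm_provider)
    chat_model = cli_model or os.environ.get("CHAT_DEFAULT_MODEL", "gpt-4o")
    return {
        "llm_provider": llm_provider,
        "embedding_provider": embed_provider,
        "chat_model": chat_model,
    }

# The mixed-provider .env sample above: OpenAI for chat, Ollama for embeddings.
os.environ["LLM_PROVIDER"] = "openai"
os.environ["EMBEDDING_PROVIDER"] = "ollama"
cfg = resolve_config()
print(cfg["embedding_provider"])  # ollama

# A --model style flag wins over CHAT_DEFAULT_MODEL:
print(resolve_config(cli_model="gpt-4o-mini")["chat_model"])  # gpt-4o-mini
```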
  3. Run Commands (using poetry run n0mail <command>):

    # --- Help --- 
    poetry run n0mail --help
    poetry run n0mail auth --help
    poetry run n0mail sync --help
    poetry run n0mail process --help
    poetry run n0mail brief --help
    
    # --- Authentication --- 
    # Run for the first time for Google authorization
    poetry run n0mail auth google
    # Force re-authorization
    poetry run n0mail auth google --force
    
    # --- Sync --- 
    # Incremental sync (default mode, based on last record)
    poetry run n0mail sync run 
    # Sync emails from the past 7 days (max 3000)
    poetry run n0mail sync run --days 7
    # Sync emails from the past 3 days, process max 100 emails
    poetry run n0mail sync run --days 3 --max-emails 100
    # Force full sync of the latest 3000 emails (ignores days and history)
    poetry run n0mail sync run --full
    # Force full sync of the latest 50 emails
    poetry run n0mail sync run --full --max-emails 50
    # Sync without generating embeddings
    poetry run n0mail sync run --no-embed
    # Specify chunk size and overlap for embedding text splitting
    poetry run n0mail sync run --chunk-size 8000 --chunk-overlap 100
    
    # --- Process (Requires OpenAI Key) --- 
    # Classify all unclassified emails
    poetry run n0mail process classify
    # Limit number, force reclassify, specify model
    poetry run n0mail process classify --max-emails 10 --reclassify --model gpt-4o-mini
    
    # Summarize all unsummarized emails (skips Bulk/Promo by default)
    poetry run n0mail process summarize
    # Limit number, force resummarize, specify model
    poetry run n0mail process summarize --max-emails 10 --resummarize --model gpt-4o-mini
    # Summarize emails, do not skip Bulk/Promo
    poetry run n0mail process summarize --no-skip-bulk
    
    # --- Briefing --- 
    # Generate briefing for the past 1 day and print (rule-based)
    poetry run n0mail brief compose
    # Generate briefing for the past 3 days and save to file
    poetry run n0mail brief compose --days 3 --output ~/briefs/$(date +%Y-%m-%d)-brief.md 
    
    # Generate briefing for the past 1 day and print (using OpenAI, requires API Key)
    poetry run n0mail brief generate
    # Use AI to generate briefing for past 3 days, include Bulk/Promo, use gpt-4o-mini model, save to file
    poetry run n0mail brief generate --days 3 --include-bulk --model gpt-4o-mini --output ~/briefs/$(date +%Y-%m-%d)-ai-brief.md
    # Use AI to generate briefing, limit emails sent to AI to 15
    poetry run n0mail brief generate --max-emails 15
    
    # --- Interactive Chat (Requires Provider Config) ---
    # Start chat session (uses configured default model)
    poetry run n0mail chat
    # Specify chat and embedding models, enable Debug output
    poetry run n0mail chat --chat-model llama3:instruct --embedding-model nomic-embed-text --debug
    # Specify number of days for initial briefing
    poetry run n0mail chat --brief-days 7
    # Enable detailed history for ReAct action phase (uses more tokens)
    poetry run n0mail chat --detailed-history
    # (Enter /quit or /exit in chat to leave, /help to see available commands - if implemented)
    
    # --- Database Inspection  ---
    # Show statistics for SQLite and VectorDB (ChromaDB)
    poetry run n0mail db stats
    
    # --- Other --- 
    # Check version
    poetry run n0mail version
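The --chunk-size/--chunk-overlap flags above suggest a sliding-window splitter for embedding long email bodies. A minimal character-based sketch of that idea (the real splitter may count tokens rather than characters):

```python
def chunk_text(text, chunk_size=8000, chunk_overlap=100):
    """Split text into windows of chunk_size characters, each
    overlapping the previous window by chunk_overlap characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Small numbers to make the windows visible:
chunks = chunk_text("a" * 250, chunk_size=100, chunk_overlap=20)
print([len(c) for c in chunks])  # [100, 100, 90, 10]
```

Larger overlap keeps more context shared between adjacent chunks (better recall at retrieval time) at the cost of more embeddings to store.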

Development

  • Run tests: poetry run pytest
  • Lint/Format code: poetry run ruff check . --fix and poetry run ruff format .
