# AI Email Assistant
- **Automatic Briefing**: Automatically generates a Markdown briefing (`brief.md`) daily at 08:00 and saves it to the local directory `~/.n0mail/briefs/`.
- **Manual Trigger**: Generate the briefing on demand with `n0mail brief run [--date]`; a 15-minute cache ensures idempotency.
- **Interactive Chat**: The `n0mail chat` REPL performs RAG search over local emails → streams answers using GPT-4o.
- **Offline Storage**: Metadata, bodies, labels, summaries, and embeddings for the last 45 days of email are stored locally in SQLite + ChromaDB.
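The chat flow above (embed the question, retrieve the nearest stored email chunks, answer from them) can be sketched without any external services. The snippet below is a stdlib-only illustration of the retrieval step, not the project's code: the real index lives in ChromaDB and the vectors come from a real embedding model, so `rank_chunks` and the toy vectors are assumptions for demonstration only.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_chunks(query_vec, chunks, top_k=3):
    """Return the top_k (score, text) pairs, best first --
    the same idea behind ChromaDB's nearest-neighbour query."""
    scored = [(cosine(query_vec, vec), text) for text, vec in chunks]
    return sorted(scored, reverse=True)[:top_k]

# Toy index of (chunk text, embedding) pairs; real embeddings would come
# from text-embedding-3-small or an Ollama embedding model.
index = [
    ("Invoice #123 is due Friday", [0.9, 0.1, 0.0]),
    ("Team lunch moved to noon",   [0.1, 0.9, 0.1]),
    ("Payment reminder: invoice",  [0.8, 0.2, 0.1]),
]
hits = rank_chunks([1.0, 0.0, 0.0], index, top_k=2)
# The retrieved chunk texts would then be placed into the LLM prompt.
```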
- ✅ **F-1: Gmail OAuth**: `n0mail auth google` completes the PKCE flow → saves the token in the system `keyring`.
- ✅ **F-2: Email Sync**: `n0mail sync run` uses `historyId` (default) or a date range (`--days`) for incremental fetching, or `--full` to fetch the latest emails. Writes to the DB and generates embeddings stored in ChromaDB.
- ✅ **F-3: Zero-Shot Classification**: `n0mail process classify` uses a GPT-4o function call → writes the `label` field back (processes all unclassified emails by default).
- ✅ **F-4: Email Summarization**: `n0mail process summarize` uses GPT-4o for summarization → writes the `summary` field back (optionally skips Bulk/Promo).
- ✅ **F-5: Briefing Composition**: `n0mail brief compose` generates a briefing from local data rules; `n0mail brief generate` uses OpenAI to generate the briefing.
- ⏳ **F-6: Automatic Generation**: Cron (`n0mail cron enable`) → calls `brief run --today`.
- ✅ **F-7: CLI Interaction**: `n0mail chat`: RAG retrieval → GPT-4o streaming.
- ⏳ **F-8: Command Completion**: `/open id`, `/copy`, `/retry`.
- ✅ **F-9: Caching Strategy**: `brief_cache` table.
- ✅ **F-10: Database Inspection**: `n0mail db stats` shows statistics for SQLite and the vector DB (ChromaDB).
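The 15-minute idempotency window behind F-9 can be sketched with the standard-library `sqlite3` module. The `brief_cache` schema and the `get_or_build` helper below are illustrative assumptions, not the project's actual code; only the table name and the 15-minute TTL come from the feature list above.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS brief_cache ("
    "  cache_key  TEXT PRIMARY KEY,"
    "  markdown   TEXT NOT NULL,"
    "  created_at REAL NOT NULL)"
)

TTL_SECONDS = 15 * 60  # the 15-minute window mentioned above

def get_or_build(cache_key, builder, now=None):
    """Return a cached briefing if it is younger than TTL_SECONDS,
    otherwise build, store, and return a fresh one."""
    now = time.time() if now is None else now
    row = conn.execute(
        "SELECT markdown, created_at FROM brief_cache WHERE cache_key = ?",
        (cache_key,),
    ).fetchone()
    if row and now - row[1] < TTL_SECONDS:
        return row[0]  # cache hit: re-running `brief run` is a no-op
    markdown = builder()
    conn.execute(
        "INSERT OR REPLACE INTO brief_cache VALUES (?, ?, ?)",
        (cache_key, markdown, now),
    )
    return markdown

calls = []
def build():
    calls.append(1)
    return "# Brief"

first  = get_or_build("2024-05-01", build, now=1000.0)
second = get_or_build("2024-05-01", build, now=1000.0 + 60)    # within TTL: cached
third  = get_or_build("2024-05-01", build, now=1000.0 + 3600)  # expired: rebuilt
```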
- **Install Dependencies**:

  ```bash
  pip install poetry
  poetry install  # Includes markdownify, beautifulsoup4
  ```
- **Configuration**:
  - Download the OAuth client ID (a `credentials.json` file for the Desktop application type) from the Google Cloud Console and rename it or save it as `client_secret_....json` in the project root directory.
  - Create a `.env` file in the project root directory and add your API key (depending on the provider you choose):

    ```bash
    # --- OpenAI (Default) ---
    # OPENAI_API_KEY_N0MAIL="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

    # --- Ollama ---
    # LLM_PROVIDER=ollama
    # EMBEDDING_PROVIDER=ollama
    # OLLAMA_HOST="http://localhost:11434"  # Ollama service address
    # # Specify Ollama models (example)
    # CLASSIFY_DEFAULT_MODEL=llama3:8b
    # SUMMARIZE_DEFAULT_MODEL=llama3:8b
    # EMBEDDING_DEFAULT_MODEL=nomic-embed-text  # Ensure it's pulled
    # BRIEF_DEFAULT_MODEL=llama3:instruct
    # CHAT_DEFAULT_MODEL=llama3:instruct

    # --- Providers can also be mixed ---
    # LLM_PROVIDER=openai
    # EMBEDDING_PROVIDER=ollama
    # OPENAI_API_KEY_N0MAIL="sk-xxxxxxxx"
    # EMBEDDING_DEFAULT_MODEL=nomic-embed-text
    # OLLAMA_HOST="http://localhost:11434"
    ```

  - **Important**: Ensure `client_secret_....json` and `.env*` are added to your `.gitignore` file.
  - Environment variables:
    - `LLM_PROVIDER`: Provider for chat, classification, summarization, and briefing generation. Supported: `openai` (default), `ollama`.
    - `EMBEDDING_PROVIDER`: Provider for generating embeddings. Supported: `openai` (default), `ollama`. Defaults to `LLM_PROVIDER` if not set.
    - `OPENAI_API_KEY_N0MAIL`: OpenAI API key (when using the `openai` provider).
    - `OLLAMA_HOST`: Ollama service address (when using the `ollama` provider); default `http://localhost:11434`.
    - `CLASSIFY_DEFAULT_MODEL`: Default model for classification (default: `gpt-4o-mini`).
    - `SUMMARIZE_DEFAULT_MODEL`: Default model for summarization (default: `gpt-4o-mini`).
    - `EMBEDDING_DEFAULT_MODEL`: Default embedding model (OpenAI default: `text-embedding-3-small`; Ollama requires an explicit value).
    - `BRIEF_DEFAULT_MODEL`: Default model for briefing generation (default: `gpt-4o`).
    - `CHAT_DEFAULT_MODEL`: Default model for chat (default: `gpt-4o`).
    - `CHAT_MODEL_THINK_MODE`: Set to `true` to stream the chat model's live thinking output (`<think>...</think>` tags) during ReAct steps (default: `false`).
    - `DETAILED_ACTION_HISTORY`: Set to `true` to use a more detailed (and token-heavy) message history in the ReAct action phase for chat (default: `false`).
  - Command-line options such as `--model` or `--embed-model` override these defaults.
- **Run Commands** (using `poetry run n0mail <command>`):

  ```bash
  # --- Help ---
  poetry run n0mail --help
  poetry run n0mail auth --help
  poetry run n0mail sync --help
  poetry run n0mail process --help
  poetry run n0mail brief --help

  # --- Authentication ---
  # Run for the first time for Google authorization
  poetry run n0mail auth google
  # Force re-authorization
  poetry run n0mail auth google --force

  # --- Sync ---
  # Incremental sync (default mode, based on last record)
  poetry run n0mail sync run
  # Sync emails from the past 7 days (max 3000)
  poetry run n0mail sync run --days 7
  # Sync emails from the past 3 days, process max 100 emails
  poetry run n0mail sync run --days 3 --max-emails 100
  # Force full sync of the latest 3000 emails (ignores days and history)
  poetry run n0mail sync run --full
  # Force full sync of the latest 50 emails
  poetry run n0mail sync run --full --max-emails 50
  # Sync without generating embeddings
  poetry run n0mail sync run --no-embed
  # Specify chunk size and overlap for embedding text splitting
  poetry run n0mail sync run --chunk-size 8000 --chunk-overlap 100

  # --- Process (requires OpenAI key) ---
  # Classify all unclassified emails
  poetry run n0mail process classify
  # Limit count, force reclassification, specify model
  poetry run n0mail process classify --max-emails 10 --reclassify --model gpt-4o-mini
  # Summarize all unsummarized emails (skips Bulk/Promo by default)
  poetry run n0mail process summarize
  # Limit count, force resummarization, specify model
  poetry run n0mail process summarize --max-emails 10 --resummarize --model gpt-4o-mini
  # Summarize emails without skipping Bulk/Promo
  poetry run n0mail process summarize --no-skip-bulk

  # --- Briefing ---
  # Generate a briefing for the past 1 day and print it (rule-based)
  poetry run n0mail brief compose
  # Generate a briefing for the past 3 days and save it to a file
  poetry run n0mail brief compose --days 3 --output ~/briefs/$(date +%Y-%m-%d)-brief.md
  # Generate a briefing for the past 1 day and print it (uses OpenAI, requires API key)
  poetry run n0mail brief generate
  # Use AI to generate a briefing for the past 3 days, include Bulk/Promo,
  # use the gpt-4o-mini model, and save to a file
  poetry run n0mail brief generate --days 3 --include-bulk --model gpt-4o-mini --output ~/briefs/$(date +%Y-%m-%d)-ai-brief.md
  # Use AI to generate a briefing, limiting emails sent to the AI to 15
  poetry run n0mail brief generate --max-emails 15

  # --- Interactive Chat (requires provider config) ---
  # Start a chat session (uses the configured default model)
  poetry run n0mail chat
  # Specify chat and embedding models, enable debug output
  poetry run n0mail chat --chat-model llama3:instruct --embedding-model nomic-embed-text --debug
  # Specify the number of days for the initial briefing
  poetry run n0mail chat --brief-days 7
  # Enable detailed history for the ReAct action phase (uses more tokens)
  poetry run n0mail chat --detailed-history
  # (Enter /quit or /exit in chat to leave; /help lists available commands, if implemented)

  # --- Database Inspection ---
  # Show statistics for SQLite and VectorDB (ChromaDB)
  poetry run n0mail db stats

  # --- Other ---
  # Check version
  poetry run n0mail version
  ```
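The `--chunk-size` / `--chunk-overlap` options above control how email bodies are split before embedding: each chunk overlaps the previous one so that context at a boundary is not lost. A minimal character-based sketch of that splitting (the `split_text` helper is hypothetical, not the project's actual splitter):

```python
def split_text(text, chunk_size=8000, chunk_overlap=100):
    """Split text into chunks of at most chunk_size characters,
    each sharing chunk_overlap characters with the previous chunk."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk already reaches the end of the text
    return chunks

# Small numbers make the overlap visible: 25 characters,
# chunks of 10 with an overlap of 2.
parts = split_text("abcdefghijklmnopqrstuvwxy", chunk_size=10, chunk_overlap=2)
```

The defaults (`8000` / `100`) match the example command shown above; in practice the trade-off is larger chunks for fewer embedding calls versus smaller chunks for more precise retrieval.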
- Run tests: `poetry run pytest`
- Lint/format code: `poetry run ruff check . --fix` and `poetry run ruff format .`