This repository was archived by the owner on Feb 21, 2026. It is now read-only.


raven

raven enables querying large language models via email. It retrieves messages from an IMAP mailbox, generates replies using an OpenAI-API-compatible LLM, and sends them via SMTP. The LLM can optionally invoke user-defined external tools.

This project is under active development, and breaking changes are possible. The go-imap/v2 dependency is also still in development.

Features

  • Monitor mailboxes with IMAP IDLE or polling
  • Query LLMs through an OpenAI-compatible API (hosted or local)
  • Multimodal input (text and images, inline or attached)
  • Configurable concurrency, to serialize access to a GPU or to process messages in parallel
  • Sender allowlist for access control
  • Tools defined in YAML, parsed with Go templates, and executed as subprocesses
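
As a sketch of the last point, a tool definition might look like the following. This is illustrative only: every field name here is an assumption, and config.example.yaml documents the actual schema.

```yaml
# Hypothetical tool definition -- field names are assumptions,
# not raven's actual schema; see config.example.yaml.
tools:
  - name: weather
    description: Get the current weather for a city.
    command: /usr/local/bin/weather-cli
    args:
      # Go template, filled in from the LLM's tool-call arguments.
      - "--city={{ .city }}"
```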

Limitations

  • raven can handle replies, but does not attempt to reconstruct the conversation history for the OpenAI API. The quoted body, if provided, is sent alongside the user’s latest message.
  • Tools do not support variadic arguments. The {{ json . }} template function may serve as a workaround when a tool accepts a JSON blob.
  • Messages may be reprocessed if:
    • raven fails to compose the reply.
    • SMTP send fails.
    • Marking the message as seen fails.
    • The process crashes during processing.
  • Reply attribution ('On <date>, <sender> wrote:') is a fixed English string.
  • Attachments are size-limited per MIME part (maxPartSize = 32 MB).
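
The {{ json . }} workaround mentioned above relies on standard Go template mechanics. The sketch below shows how such a helper could be wired up with text/template; the helper's name matches raven's documentation, but its exact registration inside raven is an assumption.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// renderArg renders a tool-argument template with a "json" helper,
// similar in spirit to raven's {{ json . }} function. The exact
// implementation inside raven is an assumption here.
func renderArg(tmplText string, data any) (string, error) {
	funcs := template.FuncMap{
		"json": func(v any) (string, error) {
			b, err := json.Marshal(v)
			return string(b), err
		},
	}
	tmpl, err := template.New("arg").Funcs(funcs).Parse(tmplText)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := tmpl.Execute(&sb, data); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	// Collapse several named arguments into one JSON blob, so a tool
	// lacking variadic arguments can still receive them all at once.
	out, err := renderArg(`{{ json . }}`,
		map[string]any{"city": "Berlin", "days": 3})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"city":"Berlin","days":3}
}
```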

Security

raven can expose its host to a variety of security risks:

  • Denial of service: local LLM inference consumes significant compute, and hosted providers bill per use; the volume and size of incoming emails are further avenues of attack
  • Prompt injection (direct, indirect, and crescendo attacks)
  • Unexpected costs (from tokens, tools, and retries)
  • Data exposure, whether through a hosted LLM provider or via manipulated tools
  • Arbitrary command execution, and all its consequences, depending on the tools exposed

The primary mitigation raven offers is the sender allowlist, which assumes (a) the email provider does not deliver unauthenticated (spoofed) email to the inbox, (b) senders can be trusted with access, and (c) senders secure their own accounts.

Other mitigations which a user can consider include:

  • Constraining the runtime and sandboxing raven
  • Sandboxing tools (jails, containers)
  • Opting for narrow tools over broad ones (e.g., scripts versus bash -c)
  • Monitoring and constraining resource usage or LLM API costs
  • Auditing logs
  • Defensive system prompts that constrain model behavior

This software is provided as-is; refer to the license.

Requirements

  • Go 1.25+
  • An LLM server with OpenAI-compatible API (e.g., llama.cpp’s llama-server)
  • IMAP/SMTP email account (or alias)
    • IMAP server must support at least two connections.

Installation

From source:

git clone https://code.chimeric.al/dwrz/raven.git
cd raven
make

The binary is written to bin/raven.

To install to $GOBIN:

make install

Configuration

Environment variables can be referenced with ${VAR} syntax.
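
For example, credentials might be drawn from the environment like this. The key names below are assumptions; config.example.yaml documents the real ones.

```yaml
# Hypothetical fragment -- consult config.example.yaml for the real keys.
imap:
  username: ${RAVEN_IMAP_USER}
  password: ${RAVEN_IMAP_PASSWORD}
```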

Reference config.example.yaml for all options.

The path of least resistance is to use a dedicated email account for raven, or an alias.

Deployment

The path to the configuration file is required:

raven -c ~/.config/raven/config.yaml # or /etc/raven/config.yaml

Example systemd service files for system or user-level services are in init/systemd/.
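
For orientation, a user-level unit might look like the sketch below. The files shipped in init/systemd/ are authoritative; the description, paths, and targets here are assumptions.

```ini
# Illustrative user service -- prefer the examples in init/systemd/.
[Unit]
Description=raven email LLM gateway
After=network-online.target

[Service]
ExecStart=%h/bin/raven -c %h/.config/raven/config.yaml
Restart=on-failure

[Install]
WantedBy=default.target
```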

License

MIT

About

Query LLMs by email. This project has moved to https://code.chimeric.al/chimerical/raven.
