raven enables querying large language models via email. It retrieves messages from an IMAP mailbox, generates replies using an OpenAI-API-compatible LLM, and sends them via SMTP. LLMs can optionally invoke user-defined external tools.
This project is under active development and breaking changes are possible. The go-imap/v2 dependency is also still in development.
- Monitor mailboxes with IMAP IDLE or polling
- Query LLMs via an OpenAI-compatible API (hosted or local)
- Multimodal input (text and images, inline or attached)
- Configurable concurrency: serialize access to a GPU, or process messages in parallel
- Sender allowlist for access control
- Tools defined with YAML, parsed with Go templates, and executed as subprocesses
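As a rough illustration of the idea only — the field names below are invented, not raven's actual schema; consult `config.example.yaml` for the real options — a tool definition might look like:

```yaml
# Hypothetical tool definition; the actual schema is in config.example.yaml.
tools:
  - name: weather
    description: Fetch the current weather for a city.
    parameters:
      city: string
    # A Go template expands model-supplied arguments into subprocess arguments.
    command: ["/usr/local/bin/weather", "{{ .city }}"]
```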
- raven can handle replies, but does not attempt to reconstruct the conversation history for the OpenAI API. The quoted body, if provided, is sent alongside the user's latest message.
- Tools do not support variadic arguments. The `{{ json . }}` template function may serve as a workaround when a tool accepts a JSON blob.
- Messages may be reprocessed if:
  - raven fails to compose the reply.
  - SMTP send fails.
  - Marking the message as seen fails.
  - The process crashes during processing.
- Reply attribution (`On <date>, <sender> wrote:`) is a constant string in English.
- Attachments are size-limited per MIME part (`maxPartSize` = 32 MB).
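The `{{ json . }}` workaround can be sketched with Go's `text/template` package. This is not raven's implementation, only a demonstration of the general mechanism: a template function that serializes all arguments into one JSON string, so a tool that accepts a single JSON blob receives every argument at once.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// renderArgs sketches the idea behind a json template function: serialize the
// entire argument map into a single JSON string for a tool that accepts one
// JSON blob instead of variadic arguments.
func renderArgs(args map[string]any) (string, error) {
	funcs := template.FuncMap{
		"json": func(v any) (string, error) {
			b, err := json.Marshal(v)
			return string(b), err
		},
	}
	tmpl, err := template.New("args").Funcs(funcs).Parse(`{{ json . }}`)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := tmpl.Execute(&sb, args); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	out, err := renderArgs(map[string]any{"city": "Tokyo", "days": 3})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"city":"Tokyo","days":3}
}
```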
raven can expose its host to a variety of security risks:
- Denial of service: local LLM inference consumes significant compute, and hosted providers incur billing. The volume and size of incoming emails are another avenue of attack.
- Prompt injection (direct, indirect, and crescendo attacks)
- Unexpected costs (from tokens, tools, retries)
- Data exposure, whether through a hosted LLM provider or through manipulated tools.
- Arbitrary command execution and all its consequences, depending on the tools exposed.
The primary mitigation raven offers is the sender allowlist, which assumes that (a) the email provider does not deliver unauthenticated (spoofed) email to the inbox, (b) senders can be trusted with access, and (c) senders secure access to their accounts.
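For illustration, an allowlist might be configured roughly like this — the key name is invented here; consult `config.example.yaml` for the actual schema:

```yaml
# Illustrative only; see config.example.yaml for the real key names.
allowlist:
  - alice@example.com
  - team@example.net
```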
Other mitigations which a user can consider include:
- Constraining the runtime and sandboxing raven
- Sandboxing tools (jails, containers)
- Opting for narrow tools over broad ones (e.g., purpose-specific scripts versus `bash -c`)
- Monitoring and constraining resource usage or LLM API costs
- Auditing logs
- Defensive system prompts that constrain model behavior
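As one example of constraining the runtime, a systemd service can apply standard sandboxing and resource-control directives. The values below are a sketch, not raven's shipped configuration; compare against the unit files in `init/systemd/` and tune for your host:

```ini
# Sketch of systemd hardening for a raven service (adjust values to taste).
[Service]
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
# Resource limits blunt denial of service via large or numerous emails.
MemoryMax=1G
CPUQuota=50%
```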
This software is provided as-is; refer to the license.
- Go 1.25+
- An LLM server with an OpenAI-compatible API (e.g., llama.cpp's `llama-server`)
- An IMAP/SMTP email account (or alias)
  - The IMAP server must support at least two concurrent connections.
From source:
```sh
git clone https://code.chimeric.al/dwrz/raven.git
cd raven
make
```

The binary is written to `bin/raven`.
To install to `$GOBIN`:

```sh
make install
```

Environment variables can be referenced with `${VAR}` syntax.
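For instance, a secret can be kept out of the config file and supplied via the environment. The key names here are illustrative; see `config.example.yaml` for the actual structure:

```yaml
# ${RAVEN_IMAP_PASSWORD} is read from the environment (illustrative keys).
imap:
  password: ${RAVEN_IMAP_PASSWORD}
```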
Reference `config.example.yaml` for all options.
The path of least resistance is to use either a dedicated email account for raven, or an alias.
The path to the configuration file is required:
```sh
raven -c ~/.config/raven/config.yaml # or /etc/raven/config.yaml
```

Example systemd service files for system- or user-level services are in `init/systemd/`.
MIT