

ECLAIRE

Privacy-focused AI assistant for your data


Demo video: watch on YouTube

Features | Installation | Selecting Models | Architecture | Development | Contributing | Docs | API


⚠️ Important Notices

Important

Pre-release / Development Status
Eclaire is currently in pre-release and under active development.
Expect frequent updates, breaking changes, and evolving APIs/configuration.
If you deploy it, please backup your data regularly and review release notes carefully before upgrading.

Warning

Security Warning
Do NOT expose Eclaire directly to the public internet.
This project is designed to be self-hosted with privacy and security in mind, but it is not hardened for direct exposure.

We strongly recommend placing it behind additional security layers, such as a VPN, an authenticating reverse proxy, or strict firewall rules.


Description

Eclaire is a local-first, open-source AI that organizes, answers, and automates across tasks, notes, documents, photos, bookmarks and more.

There are a lot of existing frameworks and libraries enabling various AI capabilities, but few deliver a complete product that lets users get things done. Eclaire assembles proven building blocks into a cohesive, privacy-preserving solution you can run yourself.

With AI gaining rapid adoption, there is a growing need for alternatives to closed ecosystems and hosted models, especially for personal, private, or otherwise sensitive data.

  • Self-hosted - runs entirely on your hardware with local models and data storage
  • Unified data - one place where AI can see and connect everything
  • AI-powered - content understanding, search, classification, OCR, and automation
  • Open source - transparent, extensible, and community-driven

What's New in v0.6.0

  • Unified deployment: frontend, backend, and workers can run in a single container
  • Simplified self-hosting: new one-command setup.sh flow, plus a streamlined compose.yaml
  • Better AI support: new vision models (including Qwen3-VL), llama.cpp router, and improved MLX support
  • Modern frontend: migrated from Next.js to Vite + TanStack Router
  • SQLite support: full SQLite database support alongside Postgres for simpler workloads
  • Database queue mode: Postgres or SQLite can handle job processing instead of Redis/BullMQ
  • New admin CLI: manage your instance from the command line

See the CHANGELOG for full details.

Features

  • Cross-platform: macOS, Linux and Windows.
  • Private by default: all AI models run locally and all data is stored locally.
  • Unified data: Manage across tasks, notes, documents, photos, bookmarks and more.
  • AI conversations: chat with context from your content; see sources for answers; supports streaming and thinking tokens.
  • AI tool calling: the assistant has tools to search data, open content, resolve tasks, add comments, create notes, and more.
  • Flexible deployment: Run as a single unified container or separate services. SQLite or Postgres. Database queue or Redis. (See Architecture section below.)
  • Full API: OpenAI-compatible REST endpoints with session tokens or API keys. API Docs
  • Model backends: works with llama.cpp, vLLM, mlx-lm/mlx-vlm, LM Studio, Ollama, and more via the standard OpenAI-compatible API. (See Selecting Models.)
  • Model support: text and vision models from Qwen, Gemma, DeepSeek, Mistral, Kimi, and others. (See Selecting Models.)
  • Storage: all assets (uploaded or generated) live in Postgres or file/object storage.
  • Integrations: Telegram (more channels coming).
  • Documents: PDF, DOC/DOCX, PPT/PPTX, XLS/XLSX, ODT/ODP/ODS, MD, TXT, RTF, Pages, Numbers, Keynote, HTML, CSV, and more.
  • Photos/Images: JPG/JPEG, PNG, SVG, WebP, HEIC/HEIF, AVIF, GIF, BMP, TIFF, and more.
  • Tasks: track your own tasks or assign tasks for the AI assistant to complete; the assistant can add comments to tasks or write results to separate docs.
  • Notes: plain text or Markdown format. Links to other assets.
  • Bookmarks: fetches bookmarked pages and creates PDF, readable, and LLM-friendly versions. Special handling for GitHub and Reddit APIs and metadata.
  • Organization: Tags, pin, flag, due dates, etc. across all asset types.
  • Hardware acceleration: takes advantage of Apple MLX, NVIDIA CUDA, and other platform-specific optimizations.
  • Mobile & PWA: installable PWA; iOS & Apple Watch via Shortcuts; Android via Tasker/MacroDroid.
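The OpenAI-compatible REST API mentioned above can be exercised with a plain curl call. The sketch below is hedged: the /v1/chat/completions route, port 3000, Bearer auth header, and model name follow the OpenAI convention and the quick-start defaults, but they are assumptions; check the API docs for Eclaire's exact routes and auth scheme.

```shell
# Hypothetical sketch of calling the OpenAI-compatible chat endpoint.
# The route, port, API key header, and model name are assumptions -- see the API docs.
ECLAIRE_URL="http://localhost:3000"
API_KEY="your-api-key"
REQUEST_BODY='{"model": "default", "messages": [{"role": "user", "content": "Summarize my open tasks"}]}'

# Guarded so the example degrades gracefully when no server is running.
RESPONSE=$(curl -fsS "$ECLAIRE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$REQUEST_BODY" 2>/dev/null || echo "no server reachable")
echo "$RESPONSE"
```

The same endpoint shape should work with any OpenAI-compatible client library by pointing its base URL at your Eclaire instance.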

Sample use cases

  • Dictate notes using Apple Watch (or other smartwatch).
  • Save bookmarks and websites to read later; generate clean “readable” and PDF versions.
  • Extract text from photos and document images (OCR).
  • Bulk-convert photos from HEIC to JPG.
  • Analyze, categorize, and search documents and photos with AI.
  • Create LLM-friendly text/Markdown versions of documents and bookmarks.
  • Save interesting content (web pages, photos, documents) from phone, tablet, or desktop.
  • Ask AI to find or summarize information across your data.
  • Schedule automations (e.g., “Every Monday morning, summarize my tasks for the week.”).
  • Chat with AI from web, mobile, Telegram, and other channels.
  • Process sensitive information (bank, health, etc.) privately on local models.
  • De-clutter your desktop by bulk-uploading and letting AI sort and tag.
  • Migrate data from Google/Apple and other vendors into an open, self-hosted platform under your control.

Screenshots

Main Dashboard · AI Assistant · Photo OCR

Installation

Prerequisites

  • Docker and Docker Compose
  • A local LLM server - llama.cpp recommended

Quick Start

mkdir eclaire && cd eclaire
curl -fsSL https://raw.githubusercontent.com/eclaire-labs/eclaire/main/setup.sh | sh

The script will:

  1. Download configuration files
  2. Generate secrets automatically
  3. Initialize the database (PostgreSQL)
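If you would rather not pipe a remote script straight into sh, a review-first variant of the same quick start (the URL is taken from the command above) looks like this:

```shell
# Download setup.sh first so you can inspect it before executing it.
SETUP_URL="https://raw.githubusercontent.com/eclaire-labs/eclaire/main/setup.sh"
curl -fsSL "$SETUP_URL" -o setup.sh || echo "download failed (offline?)"
# After reviewing the contents:
# sh setup.sh
```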

After setup completes:

# 1. Start your LLM servers (in separate terminals)
#    Models download automatically on first run if not already cached
llama-server -hf unsloth/Qwen3-14B-GGUF:Q4_K_XL --ctx-size 16384 --port 11500
llama-server -hf unsloth/gemma-3-4b-it-qat-GGUF:Q4_K_XL --ctx-size 16384 --port 11501

# 2. Start Eclaire
docker compose up -d

Open http://localhost:3000 and click "Sign up" to create your account.
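To confirm the stack is actually serving before opening the browser, a quick health check against the default port works (port 3000 per the compose defaults above; the exact status code returned for the root path is not guaranteed):

```shell
# Probe the frontend; prints an HTTP status code, or "unreachable" if nothing answers.
STATUS=$(curl -fsS -o /dev/null -w '%{http_code}' http://localhost:3000 2>/dev/null || echo "unreachable")
echo "Eclaire status: $STATUS"
```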

See AI Model Configuration to use other AI providers and models.

Configuration

Configuration lives in two places:

  • .env - secrets, database settings, ports
  • config/ai/ - LLM provider URLs and model definitions
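For orientation, a .env file of this kind typically looks like the fragment below. The variable names here are purely illustrative guesses, not Eclaire's actual keys; setup.sh generates the real file, which is the authoritative reference.

```shell
# Illustrative .env fragment only -- variable names are hypothetical.
# The file generated by setup.sh contains the actual keys and secrets.
FRONTEND_PORT=3000
DATABASE_URL=postgres://eclaire:change-me@localhost:5432/eclaire
```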

Stopping

docker compose down

Selecting Models

Eclaire uses AI models for two purposes:

  • Backend: Powers the chat assistant (requires good tool calling)
  • Workers: Processes documents and images (requires vision capability)

Apple Silicon: Mac users can leverage MLX for optimized local inference. See the configuration guide for details.

Use the CLI to manage models:

docker compose run --rm eclaire model list

See AI Model Configuration for detailed setup and model recommendations.

Architecture

Eclaire follows a modular architecture with clear separation between the frontend, backend API, background workers, and data layers.

📋 View detailed architecture diagram →

Key Components

  • Frontend: Vite web application with React 19, TanStack Router, and Radix UI
  • Backend API: Node.js/Hono server with REST APIs
  • Background Workers: Job processing and scheduling (runs unified with backend by default)
  • Data Layer: PostgreSQL (recommended) or SQLite for persistence; database or Redis for job queue
  • AI Services: Local LLM backends (llama.cpp, MLX, LM Studio, etc.) for inference; Docling for document processing
  • External Integrations: GitHub and Reddit APIs for bookmark fetching

Roadmap

  • Support for more data sources and integrations
  • More robust full text indexing and search
  • Better extensibility and plugin system
  • Improved AI capabilities and model support
  • Evals for models and content pipelines
  • More hardening and security
  • Top requests from the community

Development

For contributors who want to build from source.

Additional Prerequisites

Beyond Docker and an LLM server, you'll need:

  • Node.js ≥ 24 with corepack enabled
  • pnpm (managed via corepack)

Document/image processing tools:

macOS:

brew install --cask libreoffice
brew install poppler graphicsmagick imagemagick ghostscript libheif

Ubuntu/Debian:

sudo apt-get install libreoffice poppler-utils graphicsmagick imagemagick ghostscript libheif-examples
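After installing, it is worth confirming the key binaries are on PATH. The names below map to the packages above (soffice is LibreOffice, pdftotext comes from poppler, gm from graphicsmagick, gs from ghostscript); this is a convenience check, not part of the official setup.

```shell
# Report which document/image processing tools are available.
MISSING=0
for tool in soffice pdftotext gm gs; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
    MISSING=$((MISSING + 1))
  fi
done
echo "missing tools: $MISSING"
```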

Setup

git clone https://github.com/eclaire-labs/eclaire.git
cd eclaire
corepack enable
pnpm setup:dev
pnpm dev

Once pnpm dev is running, the dev servers print their local URLs; open the frontend URL in your browser.

Building Docker Locally

To build and test custom Docker images:

./scripts/build.sh
docker compose -f compose.yaml -f compose.dev.yaml -f compose.local.yaml up -d

Contributing

We 💙 contributions! Please read the Contributing Guide.

Security

See SECURITY.md for our policy.

Telemetry

Eclaire itself contains no telemetry, although third-party dependencies may include some. If you find an instance where that is the case, please let us know.

Community & Support

Issues: GitHub Issues