Reflector

Reflector is an AI-powered audio transcription and meeting analysis platform that provides real-time transcription, speaker diarization, translation, and summarization for audio content and live meetings. It runs 100% on local models (Whisper/Parakeet, Pyannote, Seamless-M4T, and a local LLM such as Phi-4).

License: MIT

What is Reflector?

Reflector is a web application that utilizes local models to process audio content, providing:

  • Real-time Transcription: Convert speech to text using Whisper (multi-language) or Parakeet (English) models
  • Speaker Diarization: Identify and label different speakers using Pyannote 3.1
  • Live Translation: Translate audio content in real-time to many languages with Facebook Seamless-M4T
  • Topic Detection & Summarization: Extract key topics and generate concise summaries using LLMs
  • Meeting Recording: Create permanent records of meetings with searchable transcripts

Currently, we provide a modal.com GPU template for deployment.

Background

The project architecture consists of three primary components:

  • Back-End: Python server that offers an API and data persistence, found in server/.
  • Front-End: NextJS React project hosted on Vercel, located in www/.
  • GPU implementation: GPU-backed services such as speech-to-text transcription, topic generation, automated summaries, and translation, located in server/gpu/.

It also uses authentik for authentication, when enabled.

Contribution Guidelines

All new contributions should be made in a separate branch and go through a Pull Request. Conventional Commits must be used for the PR title and commits.

Usage

To record both your voice and the meeting you are taking part in:

  • For an in-person meeting, make sure your microphone is in range of all participants. If you use several microphones, merge their audio feeds into one with an external tool.
  • For an online meeting without headphones, your microphone should be able to pick up both your voice and the meeting's audio output.
  • For an online meeting with headphones, you need to merge the audio feeds with an external tool (see the Blackhole instructions below).

Permissions:

You may need to grant your browser microphone access to record audio, in System Preferences -> Privacy & Security -> Microphone (and possibly -> Accessibility). You will be prompted for these permissions when you try to connect.

How to Install Blackhole (Mac Only)

Blackhole is an external tool for merging audio feeds, as explained in the previous section. Note: we currently do not have instructions for Windows users.

  • Install Blackhole-2ch (2 channels are enough) using one of the two installation options listed on the project page.
  • Set up an "Aggregate Device" to route web audio and local microphone input into one input.
  • Set up a "Multi-Output Device" so you can still hear the meeting on your usual output.
  • Then go to System Preferences -> Sound and choose the devices you created from the Output and Input tabs.
  • If everything is configured properly, your local microphone and the browser-run meeting are aggregated into one virtual input stream, and the output is fed back to your specified output devices.

Installation

Note: we are working toward a better installation process; these instructions are not fully accurate for now.

Frontend

Start with cd www.

Installation

pnpm install
cp .env.example .env

Then, fill in the environment variables in .env as needed. If you are unsure how to proceed, ask in Zulip.

Run in development mode

pnpm dev

Then (after completing server setup and starting it) open http://localhost:3000 to view it in the browser.

OpenAPI Code Generation

To generate the TypeScript files from the openapi.json file, make sure the Python server is running, then run:

pnpm openapi

Backend

Start with cd server.

Run in development mode

docker compose up -d redis

# on the first run, or if the schemas changed
uv run alembic upgrade head

# start the worker
uv run celery -A reflector.worker.app worker --loglevel=info

# start the app
uv run -m reflector.app --reload

Then fill .env with the omitted values (ask in Zulip).

Crontab (optional)

For crontab (only healthcheck for now), start the celery beat (you don't need it in your local dev environment):

uv run celery -A reflector.worker.app beat

GPU models

Currently, Reflector heavily uses custom local models deployed on Modal. All the microservices are available in server/gpu/.

To deploy LLM changes to Modal, you need to:

  • have a Modal account
  • set up the required secret in your Modal account (REFLECTOR_GPU_APIKEY)
  • install the modal CLI
  • connect your modal CLI to your account, if not done previously
  • run modal run path/to/required/llm
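The steps above can be sketched as a command sequence. The `modal` CLI commands are real, but the secret name and the exact file path below are assumptions; check server/gpu/ for the actual service files and the secret name they expect.

```shell
# Install the Modal CLI (ships with the modal Python package)
pip install modal

# Authenticate the CLI with your Modal account (one-time)
modal setup

# Create the API key secret the GPU services expect
# (the secret name "reflector-gpu" is an assumption)
modal secret create reflector-gpu REFLECTOR_GPU_APIKEY=<your-key>

# Run or deploy a GPU service, e.g. one of the apps under server/gpu/
modal run server/gpu/<service>.py
```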

Using local files

You can manually process an audio file by calling the process tool:

uv run python -m reflector.tools.process path/to/audio.wav
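If you have many recordings, one invocation per file is needed. A minimal Python sketch for batch processing, assuming the process tool accepts a single audio path per call as shown above (the `build_commands`/`run_all` helpers are hypothetical, not part of Reflector):

```python
import subprocess
from pathlib import Path


def build_commands(audio_dir: str) -> list[list[str]]:
    """Build one process-tool invocation per .wav file in audio_dir."""
    return [
        ["uv", "run", "python", "-m", "reflector.tools.process", str(p)]
        for p in sorted(Path(audio_dir).glob("*.wav"))
    ]


def run_all(audio_dir: str) -> None:
    """Process each audio file sequentially, failing fast on errors."""
    for cmd in build_commands(audio_dir):
        subprocess.run(cmd, check=True)
```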

Feature Flags

Reflector uses environment variable-based feature flags to control application functionality. These flags allow you to enable or disable features without code changes.

Available Feature Flags

Feature Flag    Environment Variable
requireLogin    NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN
privacy         NEXT_PUBLIC_FEATURE_PRIVACY
browse          NEXT_PUBLIC_FEATURE_BROWSE
sendToZulip     NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP
rooms           NEXT_PUBLIC_FEATURE_ROOMS

Setting Feature Flags

Feature flags are controlled via environment variables using the pattern NEXT_PUBLIC_FEATURE_{FEATURE_NAME} where {FEATURE_NAME} is the SCREAMING_SNAKE_CASE version of the feature name.
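The naming convention amounts to converting a camelCase feature name into SCREAMING_SNAKE_CASE and prefixing it. A minimal sketch of that conversion (the real flag lookup happens in the NextJS frontend; this standalone function just illustrates the convention):

```python
import re

PREFIX = "NEXT_PUBLIC_FEATURE_"


def flag_env_var(feature_name: str) -> str:
    """Map a camelCase feature name to its environment variable name."""
    # insert an underscore before each capital letter, then upper-case
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", feature_name)
    return PREFIX + snake.upper()
```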

Examples:

# Enable user authentication requirement
NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN=true

# Disable browse functionality
NEXT_PUBLIC_FEATURE_BROWSE=false

# Enable Zulip integration
NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP=true
