
MeetBetter

A powerful, privacy-focused desktop application for real-time meeting transcription with AI-powered summaries, dual audio capture, and calendar integration.


Want to test this app? See TESTING.md for a quick 5-minute setup guide!

Overview

MeetBetter is a production-ready desktop application built with Tauri 2.0, Rust, and React that transforms how meetings are transcribed and managed. It features real-time speech-to-text with sub-2-second latency, intelligent speaker separation through dual-channel audio processing, and calendar-driven automation.

Key Innovation: Dual audio capture technology that differentiates between your microphone and system audio in real-time, solving the common problem of "who said what" in virtual meetings.

Tech Stack: Rust (backend), React + TypeScript (frontend), Tauri 2.0 (framework), Deepgram API (transcription), Groq API (AI), SQLite (storage), WebSockets (real-time streaming)

Features

  • Real-time Transcription - Live speech-to-text using Deepgram (1-2 second latency)
  • Dual Audio Capture - Separate transcription for "You" (microphone) vs "Participant" (system audio/remote speakers)
    • Uses BlackHole virtual audio device for multichannel routing
    • Prevents duplicate transcriptions with intelligent deduplication
  • Calendar Integration - Auto-start transcription when meetings begin (Google Calendar OAuth)
  • Meeting Detection - Automatically detects Zoom, Teams, Google Meet, Webex, Slack processes
  • AI-Powered Summaries - Generate meeting summaries with key points, action items, and decisions
  • Smart Reply Suggestions - Get contextual reply suggestions based on the conversation
  • Meeting Management - Save, search, and review past meetings with full transcripts
  • Privacy First - Audio is streamed only to the transcription service in real time and never stored remotely; AI summaries and replies are generated from transcript text alone
  • Beautiful UI - Modern, responsive interface with dark mode support
  • Cross-Platform - Works on macOS, Windows, and Linux

Screenshots

Light Mode Dark Mode

Quick Start

Prerequisites

  • Node.js 18+ and npm
  • Rust toolchain (via rustup)
  • Platform build tools (see the Complete Setup Guide below)

Installation

# Clone the repository
git clone https://github.com/venkateswarisudalai/MeetBetter.git
cd MeetBetter

# Install dependencies
npm install

# Run in development mode
npm run tauri dev

# Build for production
npm run tauri build
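Before the first build, it can help to confirm the toolchain is actually on your PATH. This is a small sanity-check sketch; the README does not pin exact minimum versions, so treat the output as informational:

```shell
# Print versions of each required tool, or flag it as missing.
for tool in node npm cargo rustc; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%-6s %s\n' "$tool" "$("$tool" --version | head -n 1)"
  else
    echo "$tool not found -- see the Complete Setup Guide below"
  fi
done
```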

Complete Setup Guide

Step 1: Install System Dependencies

macOS:

# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Node.js
brew install node

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

# Optional: Install BlackHole for dual audio capture
brew install blackhole-2ch

Windows:

# Install Node.js from https://nodejs.org/

# Install Rust
# Download and run: https://win.rustup.rs/

# Optional: Install VB-Cable for dual audio capture
# Download from: https://vb-audio.com/Cable/

Linux:

# Install Node.js (Ubuntu/Debian)
sudo apt update
sudo apt install nodejs npm

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

# Install build dependencies (Tauri 2.0 uses the webkit2gtk 4.1 series)
sudo apt install libwebkit2gtk-4.1-dev \
    build-essential \
    curl \
    wget \
    file \
    libssl-dev \
    libgtk-3-dev \
    libayatana-appindicator3-dev \
    librsvg2-dev

Step 2: Clone and Build

# Clone the repository
git clone https://github.com/venkateswarisudalai/MeetBetter.git
cd MeetBetter

# Install JavaScript dependencies
npm install

# Build and run in development mode
npm run tauri dev

The app should launch automatically! 🚀

Step 3: Configure API Keys

You only need 2 free API keys to get started. Calendar integration and cloud sync are built in — no extra configuration needed.

  1. Get API Keys (both have free tiers):

    • Deepgram: console.deepgram.com ($200 free credit)
    • Groq: console.groq.com/keys (free tier)

  2. Add Keys to App:

    • Open MeetBetter app — the welcome screen guides you through both steps
    • Paste your Deepgram API key
    • Paste your Groq API key
    • Click Save

Step 4: Set Up Dual Audio (Optional)

Why do this? Separates "You" (microphone) from "Participant" (system audio/remote speakers) in transcriptions.

Option A: BlackHole Only (Testing - No Audio Playback)

# macOS - Install BlackHole
brew install blackhole-2ch

# Set audio output
# System Settings → Sound → Output → Select "BlackHole 2ch"

⚠️ Note: You won't hear audio with this setup, but channel separation will work perfectly for testing.
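If you switch outputs often, the third-party switchaudio-osx Homebrew tool (an assumption here, not something MeetBetter ships) lets you toggle from the terminal instead of System Settings:

```shell
# macOS only. Install once with: brew install switchaudio-osx
DEVICE="BlackHole 2ch"
if command -v SwitchAudioSource >/dev/null 2>&1; then
  SwitchAudioSource -s "$DEVICE"   # route system output to BlackHole
  SwitchAudioSource -c             # confirm the current output device
else
  echo "SwitchAudioSource not installed; use System Settings instead"
fi
```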

Option B: Multi-Output Device (Recommended - Hear Audio)

  1. Install BlackHole (if not already):

    brew install blackhole-2ch
  2. Create Multi-Output Device:

    • Open Audio MIDI Setup app (in /Applications/Utilities/)
    • Click the "+" button at bottom left
    • Select "Create Multi-Output Device"
    • In the right panel, check both:
      • BlackHole 2ch
      • MacBook Pro Speakers (or your output device)
    • Optional: Right-click the Multi-Output Device → "Use This Device For Sound Output"
  3. Set System Output:

    • Open System Settings → Sound → Output
    • Select "Multi-Output Device"
  4. Adjust Volume:

    • Keep speaker volume low to medium (prevents microphone from picking up speaker audio)
    • For best results during real meetings, use headphones instead
  5. Test It:

    # Run the included test script
    ./switch-audio.sh
    
    # Or manually test
    say "This is participant audio" &
    # Then speak into your mic
  6. Verify in MeetBetter:

    • Start Live Transcription
    • Play a video → should show "Participant:"
    • Speak into mic → should show "You:"
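If "Participant:" never appears, first confirm Core Audio can actually see the devices. `system_profiler` is a standard macOS tool; the device names below match the setup above:

```shell
# macOS: list audio devices and look for BlackHole / Multi-Output Device.
NEEDLE="BlackHole"
if command -v system_profiler >/dev/null 2>&1; then
  system_profiler SPAudioDataType | grep -i -e "$NEEDLE" -e "Multi-Output" \
    || echo "$NEEDLE not found -- try: brew reinstall blackhole-2ch"
else
  echo "system_profiler is macOS-only; skipping device check"
fi
```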

Step 5: Set Up Calendar Auto-Start (Optional)

Why do this? Automatically start transcription when your meetings begin. Calendar integration is built in — just click connect.

  1. Connect Google Calendar:

    • Open MeetBetter → Settings
    • Click "Connect Calendar" — your browser opens for Google sign-in
    • Grant calendar permissions and you'll be redirected back
  2. Enable Auto-Start:

    • Toggle "Auto-start on meeting time" to ON
    • Start buffer time: How many minutes before meeting to start (default: 2 minutes)
    • Detect meeting apps: Auto-detect Zoom, Teams, Google Meet, etc. (recommended: ON)
  3. Test It:

    • Create a test meeting in Google Calendar (5 minutes from now)
    • Open Zoom/Teams/Meet app
    • MeetBetter should show "Meeting starting in X minutes"
    • Transcription should auto-start when buffer time is reached

Step 6: Grant Permissions (macOS)

When you first run the app, macOS will ask for permissions:

  1. Microphone Access: Click "OK" to allow

    • Required for transcription
    • Can manage later in: System Settings → Privacy & Security → Microphone
  2. Accessibility (if using calendar auto-start):

    • System Settings → Privacy & Security → Accessibility
    • Add MeetBetter and toggle ON

Troubleshooting Setup

Build fails with "xcrun: error" (macOS):

xcode-select --install

Rust not found:

source $HOME/.cargo/env
# Or restart your terminal

Node version too old:

# macOS
brew upgrade node

# Or use nvm
nvm install 18
nvm use 18

Can't hear audio with Multi-Output:

  • Verify both devices are checked in Audio MIDI Setup
  • Check System Settings → Sound → Output shows "Multi-Output Device"
  • Increase speaker volume slightly

Dual audio not working:

# Verify BlackHole is installed
ls /Library/Audio/Plug-Ins/HAL/BlackHole2ch.driver

# If missing, reinstall
brew reinstall blackhole-2ch

# Restart Mac after installation
sudo reboot

API Setup

You only need 2 free API keys to get started. Calendar and cloud sync are built in.

| Service | Purpose | Get Key | Free Tier |
|---------|---------|---------|-----------|
| Deepgram | Real-time transcription | console.deepgram.com | $200 credit |
| Groq | AI summaries & replies | console.groq.com/keys | Free tier |

Setting Up Keys

  1. Open the app — the welcome screen guides you
  2. Get your Deepgram key (includes $200 free credit)
  3. Get your Groq key (free tier)
  4. Paste both in Settings → Start transcribing!
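If the app reports key errors after this, you can sanity-check both keys from a terminal. The endpoints below are the public Deepgram and Groq APIs (not part of MeetBetter), and the placeholder keys are assumptions to replace with your own:

```shell
# Substitute your real keys before running.
DEEPGRAM_API_KEY="your-deepgram-key"
GROQ_API_KEY="your-groq-key"

# Deepgram: a valid key gets HTTP 200 from the projects endpoint, 401 otherwise.
curl -s -o /dev/null -w 'Deepgram: HTTP %{http_code}\n' \
  -H "Authorization: Token $DEEPGRAM_API_KEY" \
  https://api.deepgram.com/v1/projects

# Groq: the OpenAI-compatible models endpoint behaves the same way.
curl -s -o /dev/null -w 'Groq: HTTP %{http_code}\n' \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  https://api.groq.com/openai/v1/models
```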

Usage

Live Transcription

  1. Click "Start Live Transcription"
  2. Speak into your microphone
  3. Watch real-time transcription appear
  4. Click "Stop" when done

Dual Audio Capture (Optional)

What it does: Separates "You" (your microphone) from "Participant" (system audio/remote speakers) in transcriptions.

macOS Setup:

  1. Install BlackHole 2ch:

    brew install blackhole-2ch

    Or download from: https://github.com/ExistentialAudio/BlackHole

  2. For Testing (No Audio Playback):

    • System Settings → Sound → Output
    • Select "BlackHole 2ch"
    • ⚠️ You won't hear audio, but channel separation will work perfectly
  3. For Actual Use (Hear Audio While Recording):

    • Open Audio MIDI Setup app
    • Click "+" → "Create Multi-Output Device"
    • Check both:
      • ✓ BlackHole 2ch
      • ✓ MacBook Pro Speakers (or your preferred output)
    • System Settings → Sound → Output → Select "Multi-Output Device"
    • 💡 Keep speaker volume low to prevent feedback

Windows/Linux:

  • Windows: Install VB-Cable (similar setup)
  • Linux: Use PulseAudio loopback
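The PulseAudio route can be sketched with pactl: a null sink stands in for BlackHole, becomes the default output, and is looped back to your real hardware so you still hear audio. All names here are illustrative assumptions; list your actual sinks with `pactl list short sinks`:

```shell
# Illustrative names -- adjust for your system.
SINK="meet_sink"
HW_SINK="alsa_output.pci-0000_00_1f.3.analog-stereo"  # example name; check yours

if command -v pactl >/dev/null 2>&1; then
  # 1. Virtual sink that captures system audio (BlackHole's role on macOS).
  pactl load-module module-null-sink sink_name="$SINK"
  # 2. Send all playback to the virtual sink.
  pactl set-default-sink "$SINK"
  # 3. Loop its monitor back to real hardware so you can still hear audio.
  pactl load-module module-loopback source="$SINK.monitor" sink="$HW_SINK"
  # Undo later with: pactl unload-module module-null-sink
  #                  pactl unload-module module-loopback
else
  echo "pactl not found; install PulseAudio/PipeWire utilities first"
fi
```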

Without BlackHole:

✅ App works normally, but all audio shows as "You"

Calendar Auto-Start

  1. Open Settings → Meeting Auto-Start
  2. Enable "Auto-start on meeting time"
  3. Click "Connect Calendar" → Sign in with Google
  4. Set start buffer time (default: 2 minutes before meeting)
  5. App will automatically start transcribing when meetings begin!

Generate Summary

  1. After transcription, click "Generate" in the Summary panel
  2. AI will create a concise meeting summary with key points and action items

Get Reply Suggestions

  1. Click "Generate from Transcript"
  2. Get smart, contextual reply suggestions
  3. Click any suggestion to copy it

Tech Stack

| Layer | Technology |
|-------|------------|
| Frontend | React + TypeScript + Vite |
| Backend | Rust + Tauri 2.0 |
| Transcription | Deepgram (real-time with multichannel), AssemblyAI (batch) |
| AI/LLM | Groq (Llama 3.1, Mixtral) |
| Audio | cpal (cross-platform audio capture) |
| Calendar | Google Calendar OAuth2 integration |
| Virtual Audio | BlackHole 2ch (macOS), VB-Cable (Windows) |
| Styling | CSS with dark mode support |

Project Structure

meetbetter/
├── src/                       # React frontend
│   ├── App.tsx               # Main React component
│   └── App.css               # Styles
├── src-tauri/                # Rust backend
│   ├── src/
│   │   ├── lib.rs            # Tauri commands & state
│   │   ├── deepgram.rs       # Real-time multichannel transcription
│   │   ├── system_audio.rs   # BlackHole audio device detection
│   │   ├── meeting_monitor.rs # Calendar polling & meeting detection
│   │   ├── calendar.rs       # Google Calendar OAuth integration
│   │   ├── assemblyai.rs     # Batch transcription
│   │   ├── database.rs       # SQLite meeting storage
│   │   └── audio.rs          # Audio recording
│   └── Cargo.toml            # Rust dependencies
├── switch-audio.sh           # Helper script for audio routing
├── package.json              # Node dependencies
└── README.md

Contributing

Contributions are welcome! Here's how you can help:

Ways to Contribute

  • Report bugs
  • Suggest features
  • Submit pull requests
  • Improve documentation
  • Share the project

Development Setup

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and setup
git clone https://github.com/YOUR_USERNAME/MeetBetter.git
cd MeetBetter
npm install

# Run development server
npm run tauri dev

Pull Request Process

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Roadmap

  • Dual audio capture (You vs Participant)
  • Calendar integration (Google Calendar)
  • Meeting auto-start detection
  • Outlook calendar support
  • Speaker diarization (identify multiple participants)
  • Export to various formats (PDF, Word, Markdown)
  • Meeting templates
  • Keyboard shortcuts
  • Local LLM support (Ollama)
  • Browser extension
  • Mobile companion app
  • Multi-language support
  • Windows/Linux dual audio support

FAQ

Q: Is my audio data stored anywhere? A: No. Audio is streamed to Deepgram in real time for transcription but never stored on external servers; AI summaries and replies are generated from the transcript text only.

Q: Can I use this without internet? A: Recording works offline, but transcription and AI features require an internet connection.

Q: Which API should I get first? A: Start with Deepgram (for transcription) + Groq (for AI). Both have generous free tiers.

Q: Do I need BlackHole for the app to work? A: No! The app works perfectly without BlackHole. BlackHole is only needed if you want to differentiate between "You" (microphone) and "Participant" (system audio/remote speakers) in transcriptions.

Q: Why does everything show as "You" in my transcription? A: This means BlackHole isn't installed or your audio output isn't set to BlackHole/Multi-Output Device. See the Dual Audio Capture section for setup instructions.

Q: Can I hear audio while using dual channel capture? A: Yes! Create a Multi-Output Device in Audio MIDI Setup that includes both BlackHole and your speakers. See the detailed setup instructions in the Usage section.

Q: Does calendar auto-start work with Zoom/Teams? A: Yes! The app detects when Zoom, Teams, Google Meet, Webex, or Slack processes are running and can auto-start transcription based on your calendar events.

Q: Will dual audio capture work on Windows/Linux? A: Currently, dual audio is macOS-only with BlackHole. Windows users can use VB-Cable with similar setup. Linux support is planned for future releases.

Troubleshooting

Dual Audio Issues

Problem: Everything shows as "You", no "Participant" label

  • ✅ Ensure BlackHole 2ch is installed: brew install blackhole-2ch
  • ✅ Set System Settings → Sound → Output to "BlackHole 2ch" or "Multi-Output Device"
  • ✅ Restart the app after changing audio settings

Problem: Transcriptions are repeating multiple times

  • ❌ Your audio output is set to speakers, not BlackHole
  • ❌ If using Multi-Output Device, speaker volume is too high (mic picks up echo)
  • ✅ Switch to BlackHole-only for testing, or lower speaker volume significantly

Problem: I can't hear any audio

  • This is expected if using BlackHole 2ch only
  • ✅ Create a Multi-Output Device (see Usage section)
  • ✅ Include both BlackHole 2ch and your speakers in the Multi-Output Device

Calendar Auto-Start Issues

Problem: Auto-start not triggering

  • ✅ Check Settings → Enable "Auto-start on meeting time"
  • ✅ Ensure Google Calendar is connected
  • ✅ Verify meeting app (Zoom, Teams, etc.) is running
  • ✅ Check start buffer time setting (default: 2 minutes before meeting)
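You can also confirm the meeting app's process is visible at all. The process-name patterns below are illustrative guesses and may differ from MeetBetter's own detection list:

```shell
# Look for common meeting-app processes (names vary by platform/version).
APPS='[Zz]oom|[Tt]eams|[Ww]ebex|[Ss]lack'
pgrep -fl "$APPS" \
  || echo "No meeting app process found -- launch Zoom/Teams/etc. first"
```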

Problem: "Not authenticated with Google" error

  • ✅ Click "Connect Calendar" in settings
  • ✅ Complete Google OAuth flow
  • ✅ Grant calendar read permissions

General Issues

Problem: Build fails on macOS

# Update Xcode Command Line Tools
xcode-select --install

# Update Rust
rustup update stable

Problem: Microphone not detected

  • ✅ Grant microphone permissions: System Settings → Privacy & Security → Microphone
  • ✅ Restart the app

Problem: Deepgram connection fails

  • ✅ Verify your Deepgram API key is entered correctly in Settings
  • ✅ Check your internet connection (live transcription requires one)
  • ✅ Confirm your Deepgram account still has credit remaining

License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with love using Tauri + React + Rust
