A powerful, privacy-focused desktop application for real-time meeting transcription with AI-powered summaries, dual audio capture, and calendar integration.
Want to test this app? See TESTING.md for a quick 5-minute setup guide!
MeetBetter is a production-ready desktop application built with Tauri 2.0, Rust, and React that transforms how meetings are transcribed and managed. It features real-time speech-to-text with sub-2-second latency, intelligent speaker separation through dual-channel audio processing, and calendar-driven automation.
Key Innovation: Dual audio capture technology that differentiates between your microphone and system audio in real-time, solving the common problem of "who said what" in virtual meetings.
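The dual-capture idea can be illustrated with a small de-interleaving sketch. This is a hypothetical example, not the app's actual code: it assumes channel 0 of an interleaved stereo buffer carries the microphone and channel 1 the system audio routed through a virtual device; the real channel layout may differ.

```rust
// Hypothetical sketch: de-interleave a stereo buffer into per-speaker streams.
// Assumption (not confirmed by the app's source): channel 0 = microphone,
// channel 1 = system audio via a virtual device such as BlackHole.
fn split_channels(interleaved: &[f32]) -> (Vec<f32>, Vec<f32>) {
    let mut mic = Vec::with_capacity(interleaved.len() / 2);
    let mut system = Vec::with_capacity(interleaved.len() / 2);
    for frame in interleaved.chunks_exact(2) {
        mic.push(frame[0]);    // "You"
        system.push(frame[1]); // "Participant"
    }
    (mic, system)
}

fn main() {
    let buf = [0.1, 0.9, 0.2, 0.8]; // two interleaved stereo frames
    let (mic, system) = split_channels(&buf);
    println!("mic = {:?}, system = {:?}", mic, system);
}
```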
Tech Stack: Rust (backend), React + TypeScript (frontend), Tauri 2.0 (framework), Deepgram API (transcription), Groq API (AI), SQLite (storage), WebSockets (real-time streaming)
- Real-time Transcription - Live speech-to-text using Deepgram (1-2 second latency)
- Dual Audio Capture - Separate transcription for "You" (microphone) vs "Participant" (system audio/remote speakers)
- Uses BlackHole virtual audio device for multichannel routing
- Prevents duplicate transcriptions with intelligent deduplication
- Calendar Integration - Auto-start transcription when meetings begin (Google Calendar OAuth)
- Meeting Detection - Automatically detects Zoom, Teams, Google Meet, Webex, Slack processes
- AI-Powered Summaries - Generate meeting summaries with key points, action items, and decisions
- Smart Reply Suggestions - Get contextual reply suggestions based on the conversation
- Meeting Management - Save, search, and review past meetings with full transcripts
- Privacy First - Recordings and transcripts stay on your device; audio is streamed only for live transcription, and the AI features receive transcript text alone
- Beautiful UI - Modern, responsive interface with dark mode support
- Cross-Platform - Works on macOS, Windows, and Linux
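The meeting-detection feature above boils down to matching running process names against known meeting clients. Process enumeration is platform-specific and not shown; this is only an illustrative sketch of the name-matching step, not the app's actual implementation:

```rust
// Hypothetical sketch: map a running process name to a known meeting client.
// The substrings and labels below are illustrative assumptions.
fn detect_meeting_app(process_name: &str) -> Option<&'static str> {
    let name = process_name.to_lowercase();
    const APPS: &[(&str, &str)] = &[
        ("zoom", "Zoom"),
        ("teams", "Microsoft Teams"),
        ("meet", "Google Meet"),
        ("webex", "Webex"),
        ("slack", "Slack"),
    ];
    for &(needle, label) in APPS {
        if name.contains(needle) {
            return Some(label);
        }
    }
    None
}

fn main() {
    for p in ["zoom.us", "Microsoft Teams Helper", "Finder"] {
        println!("{p} -> {:?}", detect_meeting_app(p));
    }
}
```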
- Node.js (v18 or higher)
- Rust (latest stable)
- API Keys (see API Setup)
- [Optional but Recommended] BlackHole 2ch for dual audio capture (macOS only)
# Clone the repository
git clone https://github.com/venkateswarisudalai/MeetBetter.git
cd MeetBetter
# Install dependencies
npm install
# Run in development mode
npm run tauri dev
# Build for production
npm run tauri build

# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Node.js
brew install node
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
# Optional: Install BlackHole for dual audio capture
brew install blackhole-2ch

# Install Node.js from https://nodejs.org/
# Install Rust
# Download and run: https://win.rustup.rs/
# Optional: Install VB-Cable for dual audio capture
# Download from: https://vb-audio.com/Cable/

# Install Node.js (Ubuntu/Debian)
sudo apt update
sudo apt install nodejs npm
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
# Install build dependencies
sudo apt install libwebkit2gtk-4.1-dev \
build-essential \
curl \
wget \
file \
libssl-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev

# Clone the repository
git clone https://github.com/venkateswarisudalai/MeetBetter.git
cd MeetBetter
# Install JavaScript dependencies
npm install
# Build and run in development mode
npm run tauri dev

The app should launch automatically! 🚀
You only need 2 free API keys to get started. Calendar integration and cloud sync are built in — no extra configuration needed.
- Get API Keys (both have free tiers):
- Deepgram: Sign up at https://console.deepgram.com — includes $200 free credit
- Groq: Sign up at https://console.groq.com — free tier
- Add Keys to App:
- Open MeetBetter app — the welcome screen guides you through both steps
- Paste your Deepgram API key
- Paste your Groq API key
- Click Save
Why do this? Separates "You" (microphone) from "Participant" (system audio/remote speakers) in transcriptions.
# macOS - Install BlackHole
brew install blackhole-2ch
# Set audio output
# System Settings → Sound → Output → Select "BlackHole 2ch"
- Install BlackHole (if not already):
brew install blackhole-2ch
- Create Multi-Output Device:
- Open Audio MIDI Setup app (in /Applications/Utilities/)
- Click the "+" button at bottom left
- Select "Create Multi-Output Device"
- In the right panel, check both:
- ✓ BlackHole 2ch
- ✓ MacBook Pro Speakers (or your output device)
- Optional: Right-click the Multi-Output Device → "Use This Device For Sound Output"
- Set System Output:
- Open System Settings → Sound → Output
- Select "Multi-Output Device"
- Adjust Volume:
- Keep speaker volume low to medium (prevents microphone from picking up speaker audio)
- For best results during real meetings, use headphones instead
- Test It:
# Run the included test script
./switch-audio.sh

# Or manually test
say "This is participant audio" &
# Then speak into your mic
- Verify in MeetBetter:
- Start Live Transcription
- Play a video → should show "Participant:"
- Speak into mic → should show "You:"
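The "You" vs "Participant" labels in the verification step above come from the audio channel a segment arrived on. As a hypothetical sketch (the app's actual mapping may differ), multichannel transcription results carry a channel index that can be mapped to a speaker label:

```rust
// Hypothetical sketch: label a transcript segment by its audio channel.
// Assumption: channel 0 is the microphone feed, any other channel is the
// system-audio feed routed through the virtual device.
fn speaker_label(channel_index: usize) -> &'static str {
    match channel_index {
        0 => "You",
        _ => "Participant",
    }
}

fn main() {
    println!("{}: can everyone hear me?", speaker_label(0));
    println!("{}: yes, loud and clear", speaker_label(1));
}
```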
Why do this? Automatically start transcription when your meetings begin. Calendar integration is built in — just click connect.
- Connect Google Calendar:
- Open MeetBetter → Settings
- Click "Connect Calendar" — your browser opens for Google sign-in
- Grant calendar permissions and you'll be redirected back
- Enable Auto-Start:
- Toggle "Auto-start on meeting time" to ON
- Start buffer time: how many minutes before the meeting to start transcription (default: 2 minutes)
- Detect meeting apps: Auto-detect Zoom, Teams, Google Meet, etc. (recommended: ON)
- Test It:
- Create a test meeting in Google Calendar (5 minutes from now)
- Open Zoom/Teams/Meet app
- MeetBetter should show "Meeting starting in X minutes"
- Transcription should auto-start when buffer time is reached
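The buffer-time behavior tested above can be sketched as a simple time comparison. This is an illustrative assumption about the logic, not the app's actual code; `should_auto_start` and the one-minute grace window are hypothetical:

```rust
// Hypothetical sketch: start transcription once "now" is within
// `buffer_minutes` of the meeting start (plus a small grace window so a
// check that fires just after the start time still triggers).
fn should_auto_start(now_secs: i64, meeting_start_secs: i64, buffer_minutes: i64) -> bool {
    let trigger = meeting_start_secs - buffer_minutes * 60;
    now_secs >= trigger && now_secs < meeting_start_secs + 60
}

fn main() {
    let start = 10_000; // meeting start, as a Unix timestamp
    println!("{}", should_auto_start(9_900, start, 2)); // inside the 2-minute buffer
    println!("{}", should_auto_start(9_000, start, 2)); // too early
}
```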
When you first run the app, macOS will ask for permissions:
- Microphone Access: Click "OK" to allow
- Required for transcription
- Can manage later in: System Settings → Privacy & Security → Microphone
- Accessibility (if using calendar auto-start):
- System Settings → Privacy & Security → Accessibility
- Add MeetBetter and toggle ON
Build fails with "xcrun: error" (macOS):
xcode-select --install

Rust not found:
source $HOME/.cargo/env
# Or restart your terminal

Node version too old:
# macOS
brew upgrade node
# Or use nvm
nvm install 18
nvm use 18

Can't hear audio with Multi-Output:
- Verify both devices are checked in Audio MIDI Setup
- Check System Settings → Sound → Output shows "Multi-Output Device"
- Increase speaker volume slightly
Dual audio not working:
# Verify BlackHole is installed
ls /Library/Audio/Plug-Ins/HAL/BlackHole2ch.driver
# If missing, reinstall
brew reinstall blackhole-2ch
# Restart Mac after installation
sudo reboot

You only need 2 free API keys to get started. Calendar and cloud sync are built in.
| Service | Purpose | Get Key | Free Tier |
|---|---|---|---|
| Deepgram | Real-time transcription | console.deepgram.com | $200 credit |
| Groq | AI summaries & replies | console.groq.com/keys | Free tier |
- Open the app — the welcome screen guides you
- Get your Deepgram key (includes $200 free credit)
- Get your Groq key (free tier)
- Paste both in Settings → Start transcribing!
- Click "Start Live Transcription"
- Speak into your microphone
- Watch real-time transcription appear
- Click "Stop" when done
What it does: Separates "You" (your microphone) from "Participant" (system audio/remote speakers) in transcriptions.
- Install BlackHole 2ch:
brew install blackhole-2ch
Or download from: https://github.com/ExistentialAudio/BlackHole
- For Testing (No Audio Playback):
- System Settings → Sound → Output
- Select "BlackHole 2ch"
⚠️ You won't hear audio, but channel separation will work perfectly
- For Actual Use (Hear Audio While Recording):
- Open Audio MIDI Setup app
- Click "+" → "Create Multi-Output Device"
- Check both:
- ✓ BlackHole 2ch
- ✓ MacBook Pro Speakers (or your preferred output)
- System Settings → Sound → Output → Select "Multi-Output Device"
- 💡 Keep speaker volume low to prevent feedback
- Windows: Install VB-Cable (similar setup)
- Linux: Use PulseAudio loopback
✅ App works normally, but all audio shows as "You"
- Open Settings → Meeting Auto-Start
- Enable "Auto-start on meeting time"
- Click "Connect Calendar" → Sign in with Google
- Set start buffer time (default: 2 minutes before meeting)
- App will automatically start transcribing when meetings begin!
- After transcription, click "Generate" in the Summary panel
- AI will create a concise meeting summary with key points and action items
- Click "Generate from Transcript"
- Get smart, contextual reply suggestions
- Click any suggestion to copy it
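A reply-suggestion request like the one described above would typically be a chat-completion call against Groq's OpenAI-compatible endpoint. The sketch below only builds an illustrative request body; the model name, prompt, and `reply_suggestion_payload` helper are assumptions rather than the app's actual code, and a real implementation should use a JSON library so the transcript is properly escaped:

```rust
// Hypothetical sketch: build a chat-completion request body for a reply
// suggestion. Model name and system prompt are illustrative; the transcript
// is interpolated without JSON escaping, which a real implementation must fix.
fn reply_suggestion_payload(model: &str, transcript: &str) -> String {
    format!(
        r#"{{"model":"{}","messages":[{{"role":"system","content":"Suggest a short, contextual reply to the last speaker."}},{{"role":"user","content":"{}"}}]}}"#,
        model, transcript
    )
}

fn main() {
    let body = reply_suggestion_payload("llama-3.1-8b-instant", "Can you share the slides?");
    println!("{body}");
}
```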
| Layer | Technology |
|---|---|
| Frontend | React + TypeScript + Vite |
| Backend | Rust + Tauri 2.0 |
| Transcription | Deepgram (real-time with multichannel), AssemblyAI (batch) |
| AI/LLM | Groq (Llama 3.1, Mixtral) |
| Audio | cpal (cross-platform audio capture) |
| Calendar | Google Calendar OAuth2 integration |
| Virtual Audio | BlackHole 2ch (macOS), VB-Cable (Windows) |
| Styling | CSS with dark mode support |
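The "real-time with multichannel" row above hinges on a few query parameters on Deepgram's live-streaming WebSocket endpoint. The sketch below assembles such a URL as a sketch only: the parameter names (`encoding`, `sample_rate`, `channels`, `multichannel`) follow Deepgram's streaming API, but this is not necessarily the exact URL the app constructs, so check the current Deepgram docs.

```rust
// Sketch (assumption, not the app's code): build a Deepgram live-streaming
// URL with multichannel transcription enabled for a 2-channel PCM stream.
fn deepgram_stream_url(sample_rate: u32, channels: u8) -> String {
    format!(
        "wss://api.deepgram.com/v1/listen?encoding=linear16&sample_rate={}&channels={}&multichannel=true",
        sample_rate, channels
    )
}

fn main() {
    println!("{}", deepgram_stream_url(48_000, 2));
}
```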
meetbetter/
├── src/ # React frontend
│ ├── App.tsx # Main React component
│ └── App.css # Styles
├── src-tauri/ # Rust backend
│ ├── src/
│ │ ├── lib.rs # Tauri commands & state
│ │ ├── deepgram.rs # Real-time multichannel transcription
│ │ ├── system_audio.rs # BlackHole audio device detection
│ │ ├── meeting_monitor.rs # Calendar polling & meeting detection
│ │ ├── calendar.rs # Google Calendar OAuth integration
│ │ ├── assemblyai.rs # Batch transcription
│ │ ├── database.rs # SQLite meeting storage
│ │ └── audio.rs # Audio recording
│ └── Cargo.toml # Rust dependencies
├── switch-audio.sh # Helper script for audio routing
├── package.json # Node dependencies
└── README.md
Contributions are welcome! Here's how you can help:
- Report bugs
- Suggest features
- Submit pull requests
- Improve documentation
- Share the project
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone and setup
git clone https://github.com/YOUR_USERNAME/meeting-assistant.git
cd meeting-assistant
npm install
# Run development server
npm run tauri dev

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Dual audio capture (You vs Participant)
- Calendar integration (Google Calendar)
- Meeting auto-start detection
- Outlook calendar support
- Speaker diarization (identify multiple participants)
- Export to various formats (PDF, Word, Markdown)
- Meeting templates
- Keyboard shortcuts
- Local LLM support (Ollama)
- Browser extension
- Mobile companion app
- Multi-language support
- Windows/Linux dual audio support
Q: Is my audio data stored anywhere? A: Only on your device. Audio is streamed to the transcription API in real time rather than uploaded and stored, and the AI features receive transcript text only.
Q: Can I use this without internet? A: Recording works offline, but transcription and AI features require an internet connection.
Q: Which API should I get first? A: Start with Deepgram (for transcription) + Groq (for AI). Both have generous free tiers.
Q: Do I need BlackHole for the app to work? A: No! The app works perfectly without BlackHole. BlackHole is only needed if you want to differentiate between "You" (microphone) and "Participant" (system audio/remote speakers) in transcriptions.
Q: Why does everything show as "You" in my transcription? A: This means BlackHole isn't installed or your audio output isn't set to BlackHole/Multi-Output Device. See the Dual Audio Capture section for setup instructions.
Q: Can I hear audio while using dual channel capture? A: Yes! Create a Multi-Output Device in Audio MIDI Setup that includes both BlackHole and your speakers. See the detailed setup instructions in the Usage section.
Q: Does calendar auto-start work with Zoom/Teams? A: Yes! The app detects when Zoom, Teams, Google Meet, Webex, or Slack processes are running and can auto-start transcription based on your calendar events.
Q: Will dual audio capture work on Windows/Linux? A: Currently, dual audio is macOS-only with BlackHole. Windows users can use VB-Cable with similar setup. Linux support is planned for future releases.
Problem: Everything shows as "You", no "Participant" label
- ✅ Ensure BlackHole 2ch is installed: `brew install blackhole-2ch`
- ✅ Set System Settings → Sound → Output to "BlackHole 2ch" or "Multi-Output Device"
- ✅ Restart the app after changing audio settings
Problem: Transcriptions are repeating multiple times
- ❌ Your audio output is set to speakers, not BlackHole
- ❌ If using Multi-Output Device, speaker volume is too high (mic picks up echo)
- ✅ Switch to BlackHole-only for testing, or lower speaker volume significantly
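The "intelligent deduplication" mentioned in the feature list is what normally suppresses these echo-induced repeats. As a hypothetical sketch of the idea (the app's actual logic is likely fuzzier, using timing and similarity rather than exact matches):

```rust
use std::collections::VecDeque;

// Hypothetical sketch: keep a short window of recently emitted segments and
// drop exact repeats, since mic echo of speaker audio often arrives as a
// duplicate line. Window size and normalization are illustrative choices.
struct Deduper {
    recent: VecDeque<String>,
    window: usize,
}

impl Deduper {
    fn new(window: usize) -> Self {
        Deduper { recent: VecDeque::new(), window }
    }

    /// Returns true if the segment is new and should be emitted.
    fn accept(&mut self, segment: &str) -> bool {
        let norm = segment.trim().to_lowercase();
        if self.recent.contains(&norm) {
            return false; // duplicate within the window
        }
        if self.recent.len() == self.window {
            self.recent.pop_front(); // evict the oldest segment
        }
        self.recent.push_back(norm);
        true
    }
}

fn main() {
    let mut d = Deduper::new(8);
    println!("{}", d.accept("Hello everyone"));  // new segment
    println!("{}", d.accept("hello everyone ")); // duplicate after normalization
}
```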
Problem: I can't hear any audio
- This is expected if using BlackHole 2ch only
- ✅ Create a Multi-Output Device (see Usage section)
- ✅ Include both BlackHole 2ch and your speakers in the Multi-Output Device
Problem: Auto-start not triggering
- ✅ Check Settings → Enable "Auto-start on meeting time"
- ✅ Ensure Google Calendar is connected
- ✅ Verify meeting app (Zoom, Teams, etc.) is running
- ✅ Check start buffer time setting (default: 2 minutes before meeting)
Problem: "Not authenticated with Google" error
- ✅ Click "Connect Calendar" in settings
- ✅ Complete Google OAuth flow
- ✅ Grant calendar read permissions
Problem: Build fails on macOS
# Update Xcode Command Line Tools
xcode-select --install
# Update Rust
rustup update stable

Problem: Microphone not detected
- ✅ Grant microphone permissions: System Settings → Privacy & Security → Microphone
- ✅ Restart the app
Problem: Deepgram connection fails
- ✅ Check your API key in Settings
- ✅ Verify internet connection
- ✅ Check Deepgram API status: https://status.deepgram.com
This project is licensed under the MIT License - see the LICENSE file for details.
- Tauri - Desktop framework
- Deepgram - Real-time transcription
- Groq - Fast LLM inference
- AssemblyAI - Batch transcription
- Star this repo if you find it useful!
- Report bugs
- Request features
Made with love using Tauri + React + Rust

