Commit 4647177

minor updates
1 parent cc632b2 commit 4647177

File tree

9 files changed (+129 −139 lines)

Playground/README.md

Lines changed: 22 additions & 0 deletions
@@ -4,12 +4,34 @@ Interactive demo projects showcasing what you can build with RunAnywhere.
| Project | Description | Platform |
|---------|-------------|----------|
| [YapRun](YapRun/) | On-device voice dictation — custom keyboard, multiple Whisper backends, Live Activity, offline-ready — [Website](https://runanywhere.ai/yaprun) · [TestFlight](https://testflight.apple.com/join/6N7nBeG8) | iOS & macOS (Swift/SwiftUI) |
| [swift-starter-app](swift-starter-app/) | Privacy-first AI demo — LLM Chat, Speech-to-Text, Text-to-Speech, and Voice Pipeline with VAD | iOS (Swift/SwiftUI) |
| [on-device-browser-agent](on-device-browser-agent/) | On-device AI browser automation using WebLLM — no cloud, no API keys, fully private | Chrome Extension (TypeScript/React) |
| [android-use-agent](android-use-agent/) | Fully on-device autonomous Android agent — navigates phone UI via accessibility + on-device LLM (Qwen3-4B). See [benchmarks](android-use-agent/ASSESSMENT.md) | Android (Kotlin/Jetpack Compose) |
| [linux-voice-assistant](linux-voice-assistant/) | Fully on-device voice assistant — Wake Word, VAD, STT, LLM, and TTS with zero cloud dependency | Linux (C++/ALSA) |
| [openclaw-hybrid-assistant](openclaw-hybrid-assistant/) | Hybrid voice assistant — on-device Wake Word, VAD, STT, and TTS with cloud LLM via OpenClaw WebSocket | Linux (C++/ALSA) |

## YapRun

On-device voice dictation for iOS and macOS. All speech recognition runs locally — your voice never leaves your device.

<p align="center">
  <img src="YapRun/screenshots/01_welcome.png" width="160" />
  <img src="YapRun/screenshots/03_home.png" width="160" />
  <img src="YapRun/screenshots/04_keyboard.png" width="160" />
  <img src="YapRun/screenshots/05_playground.png" width="160" />
  <img src="YapRun/screenshots/06_notepad.png" width="160" />
</p>

- **Custom Keyboard** — Tap "Yap" from any text field in any app to dictate
- **Multiple Whisper Backends** — WhisperKit (Neural Engine) and ONNX (CPU) with one-tap model switching
- **Live Activity** — Real-time transcription status on the Lock Screen and Dynamic Island
- **ASR Playground** — Record and transcribe in-app to test speed and accuracy
- **macOS Agent** — Menu bar icon, global hotkey dictation, floating flow bar
- **Offline-Ready** — Download once, run without a network connection

**[runanywhere.ai/yaprun](https://runanywhere.ai/yaprun)** | [TestFlight Beta](https://testflight.apple.com/join/6N7nBeG8) | Free on the App Store — iOS 16.0+ / macOS 14.0+, Xcode 15.0+

## linux-voice-assistant

A complete on-device voice AI pipeline for Linux (Raspberry Pi 5, x86_64, ARM64). All inference runs locally — no cloud, no API keys:

Playground/YapRun/README.md

Lines changed: 107 additions & 0 deletions
@@ -0,0 +1,107 @@
# YapRun

On-device voice dictation for iOS and macOS, powered by the RunAnywhere SDK. All speech recognition runs locally — your voice never leaves your device.

**[runanywhere.ai/yaprun](https://runanywhere.ai/yaprun)** | [TestFlight Beta](https://testflight.apple.com/join/6N7nBeG8) | Free on the App Store — no account required

<p align="center">
  <img src="screenshots/01_welcome.png" width="200" />
  <img src="screenshots/03_home.png" width="200" />
  <img src="screenshots/04_keyboard.png" width="200" />
  <img src="screenshots/05_playground.png" width="200" />
</p>

## Features

### iOS

- **Custom Keyboard Extension** — Tap "Yap" from any text field in any app to dictate with on-device Whisper
- **Live Activity** — Real-time transcription status on the Lock Screen and Dynamic Island
- **Model Hub** — Download and switch between multiple ASR models (WhisperKit Neural Engine, ONNX CPU)
- **ASR Playground** — Record and transcribe in-app to test speed and accuracy
- **Notepad** — Built-in scratchpad for quick voice drafts
- **Guided Onboarding** — Step-by-step setup for microphone, keyboard, and model download
- **Deep Links** — `yaprun://startFlow`, `yaprun://playground`, `yaprun://kill` for keyboard ↔ app communication
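Routing these deep links can be done with SwiftUI's `onOpenURL` modifier. A minimal sketch — the view and handler bodies below are illustrative placeholders, not YapRun's actual code:

```swift
import SwiftUI

// Minimal sketch of routing yaprun:// deep links in SwiftUI.
// The handler bodies are placeholders; the real flow-session logic lives elsewhere.
struct RootView: View {
    @State private var status = "idle"

    var body: some View {
        Text("YapRun — \(status)")
            .onOpenURL { url in
                switch url.host ?? "" {
                case "startFlow":  status = "listening"   // keyboard asked the app to start capture
                case "playground": status = "playground"  // open the ASR test bench
                case "kill":       status = "idle"        // tear the flow session down
                default: break
                }
            }
    }
}
```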

### macOS

- **Menu Bar Agent** — Runs as a background agent with a persistent menu bar icon
- **Global Hotkey** — System-wide keyboard shortcut to dictate and insert text at the cursor
- **Flow Bar** — Floating overlay showing dictation status
- **Hub Window** — Model management, playground, notepad, and settings in a single window
33+
### Shared (iOS + macOS)
34+
35+
- **Multiple ASR Backends** — WhisperKit (Apple Neural Engine via Core ML) and ONNX (CPU via sherpa-onnx)
36+
- **Model Registry** — Curated models with consumer-friendly names: Fast (70 MB), Accurate (134 MB), Compact CPU (118 MB), Whisper CPU (75 MB)
37+
- **Offline-Ready** — Download once during setup, run without a network connection
38+
- **Dictation History** — Recent transcriptions stored locally with timestamps
39+
40+
## Architecture

```
YapRun/
├── YapRunApp.swift            # App entry point (iOS WindowGroup + macOS agent)
├── ContentView.swift          # iOS home screen (status cards, model hub, history)
├── Core/
│   ├── AppColors.swift        # Design tokens (dark theme, orange CTA)
│   ├── AppTypes.swift         # Shared enums and type aliases
│   ├── ModelRegistry.swift    # ASR model definitions and SDK registration
│   ├── ClipboardService.swift # Cross-platform pasteboard access
│   └── DictationHistory.swift # Local history persistence
├── Features/
│   ├── Home/                  # Model cards, download progress, home VM
│   ├── Playground/            # Record → transcribe test bench
│   ├── Notepad/               # Voice-first text editor
│   ├── Onboarding/            # Multi-step guided setup (mic, keyboard, model)
│   └── VoiceKeyboard/         # Flow session manager, Live Activity, deep links
├── Shared/
│   ├── SharedConstants.swift  # App group keys, Darwin notification names, URL scheme
│   └── SharedDataBridge.swift # App ↔ keyboard extension shared state via UserDefaults suite
├── macOS/
│   ├── MacAppDelegate.swift   # Agent lifecycle, hub window, flow bar
│   ├── Features/              # macOS-specific views (hub, playground, settings, onboarding)
│   └── Services/              # Hotkey, text insertion, audio feedback, permissions
├── YapRunKeyboard/            # iOS keyboard extension (separate target)
└── YapRunActivity/            # Live Activity widget (separate target)
```

### Key Patterns

- **Flow Session (WisprFlow pattern)**: The keyboard extension triggers the main app via deep link (`yaprun://startFlow`). The app starts `AVAudioEngine` while foregrounded, stays alive via a Live Activity, then receives Darwin notifications (`startListening` / `stopListening`) from the keyboard to gate audio buffering and transcription.
- **Dual Runtime**: WhisperKit runs on the Apple Neural Engine (Core ML) for speed; ONNX via sherpa-onnx runs on the CPU as a fallback.
- **Shared Data Bridge**: The app and keyboard extension communicate through a shared `UserDefaults` suite and Darwin notifications — no network calls.
75+
## Requirements
76+
77+
| Platform | Minimum | Recommended |
78+
|----------|---------|-------------|
79+
| iOS | 16.0 | 17.0+ |
80+
| macOS | 14.0 | 15.0+ |
81+
| Xcode | 15.0 | 16.0+ |
82+
83+
## Getting Started

1. Open the project in Xcode:

   ```bash
   cd Playground/YapRun
   open YapRun.xcodeproj
   ```

2. Select the **YapRun** scheme and your target device/simulator.

3. Build and run. The onboarding flow will guide you through microphone permission, keyboard setup, and model download.

> **Keyboard Extension**: To use the custom keyboard on iOS, go to **Settings → General → Keyboard → Keyboards → Add New Keyboard** and select **YapRun**. Enable **Full Access** when prompted.
## Models

All models are downloaded from GitHub Releases and cached on-device:

| Model | Backend | Size | Best For |
|-------|---------|------|----------|
| Fast (whisperkit-tiny.en) | WhisperKit / Neural Engine | 70 MB | Quick notes, low battery |
| Accurate (whisperkit-base.en) | WhisperKit / Neural Engine | 134 MB | Longer dictation, higher accuracy |
| Compact CPU (moonshine-tiny-en-int8) | ONNX / sherpa-onnx | 118 MB | When the Neural Engine is busy |
| Whisper CPU (whisper-tiny.en) | ONNX / sherpa-onnx | 75 MB | Maximum device compatibility |
