# Speech Summary

Speech Summary is a Flutter application that transcribes live speech and uses OpenAI's GPT API to generate concise summaries. Ideal for meetings, spontaneous ideas, or distilling long speeches into focused insights.
## Table of Contents

- Features
- Tech Stack
- Screenshots
- Fake Summary Mode
- CI/CD
- Getting Started
- Tests
- TODOs
- Contributions
- License
- Author
## Features

- Live speech-to-text transcription
- AI-powered summarization via OpenAI
- Microphone permission management
- Fake Summary mode for testing without API keys
- Clean Architecture with modular layers
- Built-in `.env` support for secure API key storage
- Ideal for meetings, lectures, spontaneous ideas, or simplifying long speeches
## Tech Stack

- Flutter & Dart
- BLoC for state management
- Clean Architecture pattern
- Speech-to-text via `speech_to_text`
- Microphone permissions via `permission_handler`
- HTTP networking with `dio`
- Environment config via `flutter_dotenv`
- Dependency injection with `get_it`
- Functional programming with `dartz`
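The stack above can be wired together through `get_it` as a service locator. The sketch below is illustrative only: the repository interface, class names, and endpoint are assumptions, not the app's actual code.

```dart
import 'package:dio/dio.dart';
import 'package:get_it/get_it.dart';

final getIt = GetIt.instance;

/// Abstraction the presentation layer depends on (assumed name).
abstract class SummaryRepository {
  Future<String> summarize(String transcript);
}

/// Remote implementation backed by the OpenAI API via Dio (assumed name).
class OpenAiSummaryRepository implements SummaryRepository {
  OpenAiSummaryRepository(this._dio);
  final Dio _dio;

  @override
  Future<String> summarize(String transcript) async {
    // Endpoint and model are placeholders for illustration.
    final response = await _dio.post('/chat/completions', data: {
      'model': 'gpt-3.5-turbo',
      'messages': [
        {'role': 'user', 'content': 'Summarize briefly: $transcript'},
      ],
    });
    return response.data['choices'][0]['message']['content'] as String;
  }
}

/// Registers dependencies once at startup.
void setupLocator() {
  getIt.registerLazySingleton<Dio>(() => Dio());
  getIt.registerLazySingleton<SummaryRepository>(
    () => OpenAiSummaryRepository(getIt<Dio>()),
  );
}
```

Registering the repository behind an abstract interface is what makes the Fake Summary mode described below easy to swap in.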
## Screenshots

Below are short demos showcasing the app's core functionality and UI behavior. Watch how the app records live speech, then generates a smart summary using OpenAI.

## Fake Summary Mode

This mode lets you demo the app without an API key. Clicking **Test Fake Summary** shows a mock input and summary, which is ideal for UI tests or presentations. This feature is also demonstrated in the demo preview below.
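A Fake Summary mode of this kind is typically a stub implementation substituted for the real OpenAI-backed repository. A minimal sketch, with assumed names:

```dart
/// Abstraction the app depends on (assumed name).
abstract class SummaryRepository {
  Future<String> summarize(String transcript);
}

/// Stub used when no API key is available: returns canned data so the
/// UI flow can be demoed or widget-tested offline.
class FakeSummaryRepository implements SummaryRepository {
  @override
  Future<String> summarize(String transcript) async {
    // Small artificial delay so loading states remain visible in demos.
    await Future.delayed(const Duration(milliseconds: 300));
    return 'This is a mock summary of the recorded speech.';
  }
}
```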
## CI/CD

This project uses full CI/CD automation:

- CI: static code analysis and tests via GitHub Actions
- CD: automatic deployment of the web version to GitHub Pages after every push to `main`

Try the live web version here: https://abdullah-khudher.github.io/speech_summary_app/
## Getting Started

### Prerequisites

- Flutter SDK
- A physical Android or iOS device (for mic access)

### Installation

Clone the repository:

```bash
git clone https://github.com/abdullah-khudher/speech_summary_app.git
cd speech_summary_app
```

Create a `.env` file in the root directory:

```env
OPENAI_API_KEY=your_openai_api_key_here
BASE_URL=https://api.openai.com/v1/
```

Note: `.env` is already in `.gitignore`.

Install dependencies and run the app:

```bash
flutter pub get
flutter run
```

Note: use a real device to test microphone functionality.
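With `flutter_dotenv`, the `.env` values are typically loaded before the app starts, roughly as sketched below (key names `OPENAI_API_KEY` and `BASE_URL` are assumed to match the `.env` example above; the actual app's bootstrap code may differ):

```dart
import 'package:flutter/widgets.dart';
import 'package:flutter_dotenv/flutter_dotenv.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();

  // Load key/value pairs from the .env file bundled as an asset.
  await dotenv.load(fileName: '.env');

  final apiKey = dotenv.env['OPENAI_API_KEY'];
  final baseUrl = dotenv.env['BASE_URL'];
  // Pass these into the HTTP client (e.g. Dio BaseOptions) before runApp().
}
```

Remember that `flutter_dotenv` reads `.env` as a Flutter asset, so it must also be listed under `assets:` in `pubspec.yaml`.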
## Tests

The project includes comprehensive testing to ensure app stability:

- **Unit & widget tests**: located under `/test/`. Run with:

  ```bash
  flutter test
  ```

- **Integration tests**: located under `/integration_test/`; these simulate full user flows. Run with:

  ```bash
  flutter test integration_test/
  ```

Note: integration tests may require a real device or emulator, and proper setup of the `integration_test` package.
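A widget test under `/test/` generally follows the standard `flutter_test` pattern. The widget and strings below are illustrative assumptions, not the app's real screens:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('renders a summary placeholder', (tester) async {
    // Pump a minimal widget tree; a real test would pump the app's
    // summary screen with a fake repository injected.
    await tester.pumpWidget(const MaterialApp(
      home: Scaffold(body: Text('Summary appears here')),
    ));

    expect(find.text('Summary appears here'), findsOneWidget);
  });
}
```

Combined with the Fake Summary mode, tests like this can exercise the full UI flow without network access or API keys.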
## TODOs

- Support transcription in multiple languages
- Add sharing and export options for summaries
- Implement offline speech recognition fallback
- Add text-to-speech for generated summaries
- Automatic segmentation of long speech input
- Show real-time transcription confidence levels
## Contributions

Pull requests are welcome. For major changes, please open an issue first.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Development Notes

This project was fully designed, developed, and CI/CD-integrated by Abdullah Khudher as part of a hands-on portfolio demonstrating expertise in clean architecture, AI integration, and production-grade Flutter tooling.

## Author

Abdullah Khudher
[GitHub Profile](https://github.com/abdullah-khudher)

