이현제, 이주형, 이지수, 장예진, 정하연
System for Hard of Hearing
KESSIA Chairman's Award
The system helps hearing-impaired people accurately and quickly recognize dangerous situations they may encounter in their daily lives (such as car horns, sirens, and gas explosions).
It captures sound through an omnidirectional microphone array, classifies it, and conveys the sound's type and direction through a display and a bone conduction speaker (via vibration).
- Real-time Audio Classification: Classifies 47 sound classes in real time with YAMNet running on TensorFlow Lite.
- Sound Source Localization (SSL): Localizes the direction of incoming sound with a 4-mic array and ODAS.
- Display: Visualizes the position and type of each detected sound (via Matplotlib or Pygame).
- Bone conduction speaker: Vibrates to alert the user with three levels of danger.
- On-device & Portable: Runs entirely on-device on a Raspberry Pi powered by a power bank.
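A minimal sketch of the on-device classification step is shown below. It assumes the bundled `yamnet.tflite` accepts a 1-D float32 waveform at 16 kHz (as the TF Hub YAMNet export does) and that the lightweight `tflite_runtime` package is installed on the Pi; the helper names are hypothetical, not the project's actual API.

```python
import numpy as np

def load_yamnet(model_path="yamnet.tflite"):
    """Load the YAMNet TFLite model with whichever interpreter is available."""
    try:
        # Lightweight interpreter package, typical on Raspberry Pi.
        from tflite_runtime.interpreter import Interpreter
    except ImportError:
        import tensorflow as tf
        Interpreter = tf.lite.Interpreter
    return Interpreter(model_path=model_path)

def classify_frame(interpreter, waveform):
    """Score one mono float32 waveform (16 kHz) and return per-class scores."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.resize_tensor_input(inp["index"], [len(waveform)])
    interpreter.allocate_tensors()
    interpreter.set_tensor(inp["index"], waveform.astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])  # (patches, classes)
    return scores.mean(axis=0)  # average over YAMNet's 0.975 s patches

def top_classes(scores, class_names, k=3):
    """The k highest-scoring (class name, score) pairs."""
    idx = np.argsort(scores)[::-1][:k]
    return [(class_names[i], float(scores[i])) for i in idx]
```

The class names themselves would come from the repository's `yamnet_class_map_*.csv`.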
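For the localization step, ODAS streams its results as JSON frames over a socket. A parsing sketch follows; the frame layout (`{"timeStamp": ..., "src": [{"id", "x", "y", "z", "activity"}, ...]}`) matches ODAS's tracked-sources (SST) sink, but the exact fields depend on the sinks enabled in `odas.cfg`, so treat the layout and function names as assumptions.

```python
import json
import math

def azimuth_deg(src):
    """Azimuth in degrees from an ODAS unit direction vector (mic-array plane)."""
    return math.degrees(math.atan2(src["y"], src["x"])) % 360.0

def parse_odas_frame(text):
    """Extract (id, azimuth, activity) triples from one ODAS SST JSON frame."""
    frame = json.loads(text)
    return [
        (s["id"], azimuth_deg(s), s["activity"])
        for s in frame.get("src", [])
        if s["id"] != 0  # id 0 marks an inactive track slot
    ]
```

The resulting azimuth is what the display loop would draw as the sound's direction.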
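The three vibration levels imply a mapping from predicted sound class to danger level. The grouping below is purely illustrative (the project's actual mapping lives in `Classification.py` and the class-map CSV):

```python
# Hypothetical grouping of predicted labels into the three alert levels.
DANGER_LEVELS = {
    3: {"Gas leak", "Explosion", "Siren"},   # immediate danger: strongest vibration
    2: {"Car horn", "Vehicle"},              # warning
    1: {"Speech", "Knock"},                  # awareness
}

def danger_level(label):
    """Return 3 (highest danger) .. 1, or 0 if the sound needs no alert."""
    for level, labels in sorted(DANGER_LEVELS.items(), reverse=True):
        if label in labels:
            return level
    return 0
```

The bone conduction speaker would then be driven with a vibration pattern chosen by the returned level.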
- Raspberry Pi OS (Linux)
- Python 3.7.3
- ML Framework: TensorFlow Lite
YouTube Demo ==> ytlink
.
├── Docs/ # Project documentation (Proposals, Final report)
├── deprecated/ # Unused or legacy files
├── img/ # Predicted class assets
├── odas/ # External: ODAS repository (Vendored)
│
├── HappyNewEar.sh # Project executable shell script
├── main_plt.py # Main entry point (using Matplotlib for plotting)
├── main_pygame.py # Main entry point (using Pygame for plotting)
├── Classification.py # Audio classification core module
│
├── yamnet.tflite # YAMNet model weights
├── yamnet_class_map_*.csv # Class label mapping for YAMNet predictions
├── odas.cfg # ODAS configuration file
│
├── log.txt # Runtime logs
└── README.md
ODAS repository ==> odasLink