SARATHI (Safety Assisted Responsive Automated Technology for Highway Independence)
The SARATHI Driver Monitoring System (DMS) leverages computer vision and machine learning to detect driver drowsiness in real time from a webcam video feed. The system is built with several key technologies:
- OpenCV: Utilized for video capture and image processing, including converting frames to grayscale and detecting faces.
- dlib: Employed for facial landmark detection, which pinpoints specific regions of the face, such as the eyes.
- pygame: Used to play an alert sound when drowsiness is detected.
- pyttsx3: A text-to-speech library that verbally alerts the driver to wake up.
- SciPy: Assists in calculating distances between facial landmarks to determine the eye aspect ratio (EAR).
The EAR is computed from the coordinates of six landmarks around each eye. The formula is:

EAR = (A + B) / (2C)

where A and B are the vertical distances between the upper and lower eyelid landmarks and C is the horizontal distance between the eye corners.
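The calculation above can be sketched as a small function. The project computes the distances with SciPy's `euclidean`; the standard library's `math.dist` is used here instead so the sketch stays dependency-free, and the sample coordinates are illustrative, not real landmark data.

```python
from math import dist  # equivalent here to scipy.spatial.distance.euclidean

def eye_aspect_ratio(eye):
    """eye: sequence of six (x, y) landmark points for one eye,
    ordered corner, upper lid x2, corner, lower lid x2."""
    A = dist(eye[1], eye[5])  # first vertical distance (upper to lower lid)
    B = dist(eye[2], eye[4])  # second vertical distance
    C = dist(eye[0], eye[3])  # horizontal distance (corner to corner)
    return (A + B) / (2.0 * C)

# An open eye yields a noticeably higher EAR than a nearly closed one:
open_eye   = [(0, 0), (1, -2), (3, -2), (4, 0), (3, 2), (1, 2)]
closed_eye = [(0, 0), (1, -0.2), (3, -0.2), (4, 0), (3, 0.2), (1, 0.2)]
```

With these points the open eye gives an EAR of 1.0 and the nearly closed eye about 0.1, which is why a fixed threshold can separate the two states.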
- The system captures video frames continuously and flips them for correct orientation.
- Faces are detected within each frame using a combination of Haar cascades and dlib's face detector.
- Once faces are located, the eye aspect ratios for both eyes are calculated.
- If the EAR falls below a pre-defined threshold for a specified number of consecutive frames, the system triggers an audio alert via pygame and a verbal warning using pyttsx3.
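The threshold-and-counter logic in the last step can be sketched as follows. The threshold (0.25) and frame count (20) are illustrative values, not necessarily the project's actual settings, and the pygame/pyttsx3 calls are stubbed out as a comment.

```python
EAR_THRESHOLD = 0.25   # illustrative: EAR below this counts as "eyes closed"
CONSEC_FRAMES = 20     # illustrative: frames of closed eyes before alerting

def should_alert(ear_values, threshold=EAR_THRESHOLD, consec=CONSEC_FRAMES):
    """Return True if the EAR stays below the threshold for `consec`
    consecutive frames anywhere in the stream of per-frame EAR values."""
    counter = 0
    for ear in ear_values:
        if ear < threshold:
            counter += 1
            if counter >= consec:
                # here the real system plays the pygame alert sound
                # and speaks "Wake up!" via pyttsx3
                return True
        else:
            counter = 0  # eyes reopened: reset the streak
    return False
```

Resetting the counter whenever the eyes reopen is what distinguishes genuine drowsiness from ordinary blinking, which only lowers the EAR for a few frames at a time.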
- Initialization: Libraries and the webcam are initialized, and configurations like speech rate for TTS are set.
- Main Loop: The system reads frames, processes them to detect faces and eyes, and computes the EAR.
- Alert Mechanism: If the EAR is below the threshold long enough, an audio alert and a verbal warning are triggered.
- Clean-Up: Releases the video capture and closes all OpenCV windows when the program ends.
NOTE: Download the shape_predictor_68_face_landmarks.dat file from the following link: https://github.com/italojs/facial-landmarks-recognition/blob/master/shape_predictor_68_face_landmarks.dat
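In the 68-point annotation that this predictor file uses, points 36–41 cover one eye and 42–47 the other, so the six points per eye can be sliced straight out of the full landmark list. A minimal sketch, with the dlib detector and predictor calls shown only as comments since they require the model file and a camera frame:

```python
# Standard 68-point landmark indices for the two eye regions.
RIGHT_EYE = slice(36, 42)
LEFT_EYE = slice(42, 48)

def extract_eyes(landmarks):
    """landmarks: sequence of 68 (x, y) tuples.
    Returns (right_eye, left_eye), six points each."""
    return list(landmarks[RIGHT_EYE]), list(landmarks[LEFT_EYE])

# In the real pipeline the points come from dlib, roughly:
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   shape = predictor(gray_frame, face_rect)
#   landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```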
The Driver Monitoring System is designed to help drivers stay awake and alert while on the road. Here's how it works:
The system uses a webcam to watch the driver’s face in real-time. It looks for signs that the driver might be getting sleepy.
- Seeing Your Eyes: It measures how open your eyes are (the eye aspect ratio) to tell whether they are open or closed.
- Warning You: If your eyes are closed for too long, indicating you might be falling asleep, it plays an alert sound and says "Wake up!" to grab your attention.
- Video and Sound: It combines video technology to watch your face and sound technology to alert you.
- Continuous Monitoring: It keeps checking your face and eyes as you drive to make sure you’re alert.
The system pairs the alert sound with a spoken "Wake up!" so that you don't miss the warning even if one cue goes unnoticed.
Built from computer vision for face and eye detection, sound alerts, and a text-to-speech engine for voice warnings, the system is all about keeping drivers safe by ensuring they stay awake and alert while driving.
- Test Image Eye:
- Test Webcam Eye:
- Eye and Face Detect:
- Final Demo clip (click this image to download the demo video):




