Electrical Engineering and Systems Science > Signal Processing

Title: A Review of Driver Drowsiness Detection Systems: Techniques, Advantages and Limitations

Abstract: Driver drowsiness is one of the leading factors in road accidents, causing severe injuries and deaths every year. Drowsiness means difficulty staying awake, which can lead to falling asleep. This paper presents a literature review of driver drowsiness detection systems based on the analysis of physiological signals, facial features, and driving patterns. The paper also presents and details the recently proposed techniques for each class and provides a comparative study of recently published works regarding accuracy, reliability, hardware requirements, and intrusiveness. We summarize and discuss the advantages and limitations of each class. Since each class of techniques has its own advantages and limitations, a hybrid system that combines two or more techniques can be efficient, robust, accurate, and usable in real time, taking advantage of each technique.



International Advanced Computing Conference

IACC 2023: Advanced Computing, pp. 36–46

Analysis and Implementation of Driver Drowsiness, Distraction, and Detection System

Govind Singh Patel, Shubhada Chandrakant Patil, Akshata Adinath Patil, Rutuja Pravin Dahotre & Tejas Jitendra Patil

  • Conference paper
  • First Online: 26 March 2024

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 2053))

This work presents the analysis and implementation of a driver drowsiness, distraction, and detection system using image processing techniques. The literature on drowsiness, distraction, and detection is reviewed and tabulated together with its parameters. Flow charts of the software and hardware of the proposed architecture are presented. A comparative analysis of the parameters and their accuracy percentages is given in a table, and the proposed system achieves better accuracy than the other results. After practical implementation, the system gives accurate results for detecting driver sleepiness, classifying the driver's state as Sleepy, Drowsy, or Active. The proposed work achieves an eye-detection accuracy of 95% and a drowsiness-detection accuracy of 90%, which is approximately 5–7% more accurate than other existing work.

Keywords:

  • distraction
  • image processing technique



Author information

Authors and Affiliations

Sharad Institute of Technology College of Engineering, Yadrav, Ichalkaranji, Maharashtra, 146121, India

Govind Singh Patel, Shubhada Chandrakant Patil, Akshata Adinath Patil, Rutuja Pravin Dahotre & Tejas Jitendra Patil


Corresponding author

Correspondence to Govind Singh Patel.

Editor information

Editors and Affiliations

SR University, Warangal, India

Deepak Garg

COPELABS, Lusófona University, Lisbon, Portugal

Joel J. P. C. Rodrigues

Bennett University, Greater Noida, India

Suneet Kumar Gupta

Swansea University, Wales, UK

Xiaochun Cheng

Lovely Professional University, Phagwara, India

Pushpender Sarao

SITCOE Engineering College, Ichalkaranji, India

Govind Singh Patel


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Patel, G.S., Patil, S.C., Patil, A.A., Dahotre, R.P., Patil, T.J. (2024). Analysis and Implementation of Driver Drowsiness, Distraction, and Detection System. In: Garg, D., Rodrigues, J.J.P.C., Gupta, S.K., Cheng, X., Sarao, P., Patel, G.S. (eds) Advanced Computing. IACC 2023. Communications in Computer and Information Science, vol 2053. Springer, Cham. https://doi.org/10.1007/978-3-031-56700-1_4


DOI: https://doi.org/10.1007/978-3-031-56700-1_4

Published: 26 March 2024

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-56699-8

Online ISBN: 978-3-031-56700-1

eBook Packages: Computer Science, Computer Science (R0)



Real-Time Machine Learning-Based Driver Drowsiness Detection Using Visual Features

Yaman Albadawi

1 Department of Computer Science and Engineering, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates

Aneesa AlRedhaei

2 College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates

Maen Takruri

3 Center for Information, Communication and Networking Education and Innovation (ICONET), American University of Ras Al Khaimah, Ras Al Khaimah 72603, United Arab Emirates

Associated Data

Not applicable.

Drowsiness-related car accidents continue to have a significant effect on road safety. Many of these accidents can be eliminated by alerting drivers once they start feeling drowsy. This work presents a non-invasive system for real-time driver drowsiness detection using visual features. These features are extracted from videos obtained from a camera installed on the dashboard. The proposed system uses facial landmarks and face mesh detectors to locate the regions of interest where the mouth aspect ratio, eye aspect ratio, and head pose features are extracted and fed to three different classifiers: random forest, sequential neural network, and linear support vector machine classifiers. Evaluations of the proposed system over the National Tsing Hua University driver drowsiness detection dataset showed that it can successfully detect and alert drowsy drivers with an accuracy of up to 99%.

1. Introduction

Drowsiness is a major concern with respect to road safety. Drivers' loss of awareness due to microsleep can frequently lead to destructive accidents. Falling asleep at the wheel is usually related to lack of sleep, exhaustion, or mental health problems. In the UAE, the Ministry of Interior recorded 2931 car crashes in 2020, and the number increased to 3488 in 2021. The majority of these traffic accidents were caused by distracted driving due to drowsiness, sudden swerving, or failure to maintain a safe distance between vehicles [ 1 ]. Given this situation, it is crucial to exploit new technologies to design systems that can track drivers and estimate their level of attention while driving. As multiple countries are concerned about this issue, researchers worldwide have worked on building Driver Drowsiness Detection (DDD) systems capable of detecting drivers' drowsiness signs in the early stages.

According to the literature, drowsiness detection systems can be grouped into three categories based on the measures that are used to detect the drowsiness signs [ 2 , 3 , 4 , 5 ]: biological-based, vehicle-based, and image-based systems. In the first category, biological-based measures rely on monitoring the body's physiological signals, including ElectroEncephaloGraphy (EEG), ElectroCardioGraphy (ECG), ElectroMyoGraphy (EMG), Electro-OculoGraphy (EOG) signals, and blood pressure [ 6 , 7 , 8 , 9 ]. In this type of system, drowsiness is determined by detecting the signal's deviation from the characteristics of the normal state and analyzing whether the new signal indicates drowsiness. In the second category, vehicle-based measures depend on monitoring variations in the car's movement patterns through different sensors installed to measure various vehicle and street parameters. To infer the drowsiness level, vehicle-based systems analyze the changes or abnormal behavior of the car, including, for example, the steering wheel angle, speed, or deviation from the lane [ 10 , 11 ]. The third category is image-based measures, which depend mainly on the drowsiness signs that appear on the driver's face and head. These systems detect drowsiness by monitoring the driver's head movements and facial parameters such as the eyes, mouth, facial expressions, eyebrows, or respiration [ 12 , 13 , 14 ].

All three categories have some limitations [ 2 , 15 ]. Biological-based systems can detect drowsiness in the initial stages due to their ability to compare the continuous changes in the physiological signals, but most biological-based systems require electrodes to be connected to the driver's body. This setup is usually inconvenient and uncomfortable for the driver. It also involves noise that affects the signal quality, leading to decreased accuracy. Vehicle-based systems generally depend on the vehicle type and can be greatly affected by multiple factors, including road characteristics, climate conditions, and the driver's experience, habits, and ability to drive. Limitations of the image-based systems are strictly related to the quality of the camera used and its adaptability to different lighting conditions. The existence of objects covering parts of the face, such as glasses, sunglasses, masks, etc., can also affect the accuracy of image-based DDD systems. However, among these three systems, image-based systems are considered to be fully non-invasive, low cost, and minimally affected by road conditions. Therefore, image-based measures are widely deployed to develop versatile, affordable, real-time, and fully portable DDD devices [ 2 , 12 , 13 , 14 , 16 , 17 ].

In this work, we present a new image-based DDD system. It uses a unique combination of features derived from the driver's facial parameters to train and test three classifiers, namely Random Forest (RF), sequential Neural Networks (NN), and linear Support Vector Machine (SVM). The features used in this system are the Eye Aspect Ratio (EAR), Mouth Aspect Ratio (MAR), and head pose estimation. The proposed system is convenient for the driver in the sense that it does not require any sensors or equipment to be attached to the driver's body. It can be adapted for use in different vehicles, including buses, cars, motorcycles, and others. Evaluations of the proposed system on the National Tsing Hua University DDD (NTHUDDD) video dataset show that it can achieve an accuracy of up to 99%, indicating that it is an effective solution.

The rest of the paper is organized as follows: Section 2 summarizes the recent studies relating to the features used in this work. The methodology is presented in Section 3 . Section 4 presents and discusses the results. The last section states the conclusions and the future directions.

2. Related Work

The problem of driver drowsiness detection has been studied by many researchers worldwide. The proposed approaches to tackle the problem can be mainly differentiated based on the drowsiness indicative features used [ 2 ]. Driver drowsiness indicative features obtained from body signs measurements (such as EEG, ECG, PPG, and EMG) are referred to as biological features, which, although accurate in detecting drowsiness, are inconvenient for the driver as they involve the use of sensors attached to the driver’s body [ 2 , 6 , 7 , 8 , 9 ]. Other widely used driver drowsiness indicative features are based on vehicle driving patterns where measurements such as the steering wheel angle and lane departure frequency are related to the driver drowsiness levels [ 2 ]. Although convenient for the driver, the literature shows that the accuracy of this method is not high [ 10 , 11 ]. The third drowsiness indicative features are image based. They are usually obtained from videos monitoring the driver’s behavior to extract features relating to the driver’s eye, mouth, and head movements [ 2 ]. They are more convenient for the driver than the biological-based ones as they do not involve attaching equipment or sensors to the driver’s body.

Image-based systems are the most commonly used techniques for detecting driver drowsiness. Facial parameters such as the eyes, mouth, and head can be used to identify many visual behaviors that fatigued people exhibit. Such drowsy behaviors can be recorded by cameras or visual sensors. Then, from these records, several features can be extracted and analyzed with computer vision techniques to observe the driver's physical condition and detect drowsiness in a non-invasive manner. Broadly, image-based systems fall into three groups depending on whether they observe the eyes, the mouth, or head movements [ 2 ]. Various image-based features have been used in the literature. These include blink frequency, maximum duration of closure of the eyes [ 13 ], percentage of eyelid closure [ 18 ], eye aspect ratio [ 19 ], eyelids' curvature [ 17 ], yawning frequency [ 20 ], MAR [ 21 ], mouth opening time [ 22 ], head pose [ 23 ], head-nodding frequency [ 4 ], and head movement analysis [ 24 ]. Combinations of these features have been considered as well [ 20 , 21 , 25 ]. In this section, we provide a detailed explanation of the features that are used in our proposed system.

The most common features used to detect drowsiness in image-based systems are extracted from the eye region. Several researchers proposed the EAR [ 26 , 27 , 28 ] as a simple metric to detect eye blinking using facial landmarks. It is utilized to estimate the eye openness degree. A sharp drop in the EAR value leads to a blink being recorded.

Maior et al. [ 27 ] developed a drowsiness detection system based on the EAR metric. They calculated the EAR values for consecutive frames and used them as inputs for machine learning algorithms including the multilayer perceptron, RF, and SVM classification models. Their evaluation results showed that the SVM performed the best with 94.9% accuracy. The EAR metric was also used in [ 29 ] as an input to a binary SVM classifier for drowsiness detection. The model detected the driver's drowsiness state with 97.5% accuracy.

Mouth behavior is a good indicator of drowsiness as it provides useful features for DDD. In [ 30 ], the authors proposed to track mouth movement to recognize yawning as a drowsiness indicator. In their experiment, they used a dataset of 20 yawning images and over 1000 normal images. The system used a cascade classifier to locate the driver’s mouth from the face images, followed by an SVM classifier to identify yawning and alert the driver. The final results gave a yawning detection rate of 81%. Another mouth-based feature is the mouth opening ratio [ 29 ]. It is also referred to as the MAR [ 21 ]. It describes the opening degree of the mouth as an indicator for yawning. This feature was fed to an SVM classifier in [ 29 ], achieving an accuracy of 97.5%.

Another useful parameter for detecting drowsiness in image-based systems is head movement, which can signal drowsy behavior. Accordingly, it can be used to derive features that are useful for detecting drowsiness using machine learning. Such head features include head-nodding direction, head-nodding frequency [ 4 ], and head pose [ 31 ]. In [ 31 ], the forehead was used as a reference to detect the driver's head pose. Infrared sensors were used in [ 24 ] to follow the head movement and detect the driver's fatigue. In [ 32 , 33 ], before head position analysis was performed, a special micro-nod detection sensor was used in real-time to track the head pose feature in 3D.

Moujahid et al. [ 20 ] presented a face-monitoring drowsiness-detection system that captured the most prominent drowsiness features using a hand-crafted compact face texture descriptor. Initially, they recorded three drowsiness features, namely head nodding, yawning frequency, and blinking rate. After that, they applied pyramid multi-level face representation and feature selection to achieve compactness. Lastly, they employed a non-linear SVM classifier that resulted in an accuracy of 79.84%.

Dua et al. [ 34 ] introduced a driver drowsiness-detection architecture that used four deep learning models: ResNet, AlexNet, FlowImageNet, and VGG-FaceNet. These models extract features from the driver's footage, including head gestures, hand gestures, behavioral features (i.e., head, mouth, and eye movements), and facial expressions. Simulated driving videos were fed to the four deep learning models, whose outputs were passed to a simple averaging ensemble followed by a SoftMax classifier, which resulted in 85% overall accuracy.

3. Methodology

The methodology followed to develop the proposed DDD system is presented in detail in this section. Firstly, the system design is illustrated. Secondly, a dataset description is provided. Lastly, the four main steps followed in the implementation process are discussed, which are (1) preprocessing, (2) feature extraction, (3) data labeling, and (4) classification.

3.1. System Design

The flowchart in Figure 1 shows the design flow of the proposed drowsiness-detection system. The system design consists of five main steps. In the first step, the system starts by capturing a video that monitors the driver’s head and extracts frames from it. The second step is preprocessing, where first, the Blue, Green, and Red (BGR) colored frames are each converted to grayscale. Then, for the eyes and mouth region, face detection is applied by utilizing the Dlib Histogram of Oriented Gradients (HOG) face detector [ 35 ]. The Dlib facial landmarks detector is then applied to extract the eyes and mouth regions. Lastly, in the preprocessing step, to capture the head region, MediaPipe face mesh [ 36 ] is used to obtain a 3D map of the face and extract the 3D nose coordinates to use as a reference to estimate the driver’s head position.

Figure 1. System design.

The third step involves calculating for each frame a feature vector containing the EAR, MAR, and the nose X–Y coordinates, and storing them in a separate list. This is repeated to populate a window (matrix) with feature vectors corresponding to 15 consecutive frames. Once the system has the first 15 feature vectors stored, it feeds them to the trained classification model which results in initial drowsy or alert labels. The final decision of whether the driver is drowsy is taken if the drowsy label is produced 15 consecutive times and an alarm will sound to alert the driver. Otherwise, the driver will be considered alert. As the process continues, the system employs the moving window concept. The moving window is fixed in size and can only take 15 feature vectors corresponding to a matrix of dimension 4 × 15. When a new frame is recorded, its corresponding feature vector is fed into the feature window while the oldest feature vector in the window is dropped out.

Accordingly, the first decision about the driver's drowsiness status is given by the system after 1 s, as the system waits to populate the window with 15 feature vectors and then counts 15 classifier labels; i.e., the first decision requires recording 30 frames: 15 to populate the feature window and 15 for the label count. Referring to the moving window discussed above, the following decisions, in contrast, are taken almost instantly. When a new frame is recorded, its corresponding feature vector is fed into the feature window while the oldest feature vector in the window is dropped out. In this case, we now have a full window with 15 feature vectors, 14 previous labels, and the current (new) label, which accounts for a time period of one frame (1/30 s ≈ 33 ms). A new decision therefore requires only one new frame, which spans 33 ms. Considering that the preprocessing and classification times are minimal, our system's first decision takes 1 s, while the following decisions are reported every 33 ms, so the response can be considered real time.
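A minimal sketch of this moving-window decision logic is given below, assuming a trained scikit-learn-style classifier; the helper names, the alarm callback, and the flattening of the 4 × 15 window into a single sample are illustrative assumptions rather than the authors' exact implementation.

```python
from collections import deque

import numpy as np

WINDOW = 15          # feature vectors per window (15 frames = 0.5 s at 30 frames/s)
DROWSY_STREAK = 15   # consecutive drowsy labels required before the alarm sounds

feature_window = deque(maxlen=WINDOW)       # each entry: [EAR, MAR, nose_x, nose_y]
recent_labels = deque(maxlen=DROWSY_STREAK)

def process_frame(frame_features, model, sound_alarm):
    """Push one frame's feature vector and return the drowsiness decision.

    frame_features: [EAR, MAR, nose_x, nose_y] for the current frame.
    model: trained classifier exposing predict() (RF, NN, or SVM).
    sound_alarm: callable that alerts the driver (hypothetical helper).
    Returns None while the first window is filling, else 0 (alert) or 1 (drowsy).
    """
    feature_window.append(frame_features)
    if len(feature_window) < WINDOW:
        return None                          # still populating the first window
    # Flatten the 4 x 15 window into a single sample for the classifier.
    sample = np.asarray(feature_window).T.reshape(1, -1)
    recent_labels.append(int(model.predict(sample)[0]))
    if len(recent_labels) == DROWSY_STREAK and all(recent_labels):
        sound_alarm()                        # drowsy label produced 15 times in a row
        return 1
    return 0
```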

3.2. Dataset

In this work, the NTHUDDD video dataset was used to implement this DDD system [ 37 ]. The dataset was obtained under simulated driving conditions. A total of 36 subjects were recorded while sitting on a chair playing a driving game with a simulated driving wheel and pedals, with their facial expressions monitored for drowsiness signs. Active infrared (IR) illumination was used to acquire IR videos in the dataset collection. The videos under consideration in this work were taken at a rate of 30 frames/s with a resolution of 640 × 480 pixels and an overall length of nine and a half hours. They were recorded in AVI format.

The 36 subjects were of various ethnicities, genders, and facial characteristics. They were recorded under different scenarios with and without glasses or sunglasses under a variety of simulated driving conditions during the day and night times. Various subject behaviors were recorded including normal driving, talking, turning around, slow eye blinking, yawning, and head nodding. Figure 2 shows some of these behaviors. Table 1 illustrates a further description of the dataset. This work has utilized 23 subjects from the NTHUDDD dataset: 18 for training and 5 for testing. The subject selection was based on the different facial appearances and scenarios including wearing/not wearing eyeglasses.

Figure 2. Drivers' behaviors.

Table 1. NTHUDDD dataset description.

3.3. Preprocessing

For preprocessing, the colored frames are each converted to grayscale. Then, to obtain the eyes and mouth features, the face was extracted by utilizing Dlib’s HOG face detector, where the detector function returned a rectangle’s coordinates, which surround the face region. Following that, the Dlib facial landmarks solution was utilized. This solution estimates the location of 68 points on the face, forming a map that represents the key facial structures on the face, as shown in Figure 3 a [ 19 ]. Thus, it was used to detect and extract the eye and mouth regions.

Figure 3. The maps of the two landmark solutions used: (a) Dlib facial landmarks solution map; (b) MediaPipe face mesh solution map.

For the head pose estimation feature, we used the MediaPipe face mesh solution [ 36 ], which is a face geometry solution that estimates 468 face landmarks in three dimensions, as shown in Figure 3 b. The X and Y output coordinates of the face mesh solution are normalized based on the frame size, while the Z coordinate represents the face mesh depth, which reflects the distance of the head from the camera. In order to estimate the head pose in the captured video, the initial nose coordinates were first extracted to be used as a reference for the head location and movements in the following frames.

3.4. Feature Extraction

Various human and vehicle features were used to model different drowsiness detection systems. However, in this work, the modeling is based on the EAR and MAR metrics along with drowsy head pose estimation.

3.4.1. EAR Metric

According to Rosebrock [ 19 ], detecting blinking using the EAR feature has multiple advantages compared to detection with traditional image-processing methods. In traditional methods, eye localization is applied first. Then, thresholding is used to find the whites of the eyes in the image. Following that, eye blinking is indicated by detecting the disappearance of the eye's white region. In contrast, no image processing is needed when using the EAR metric, so it requires less memory space and processing time. Instead, the EAR feature depends on calculating ratios of distances between the eye's facial landmarks, which makes it a straightforward solution. In general, the EAR metric computes a ratio extracted from the horizontal and vertical distances of six eye landmark coordinates, as shown in Figure 4 [ 38 ]. These coordinates are numbered from the left eye corner, starting at p1 and proceeding clockwise to p6. Rosebrock [ 19 ] explains that all six coordinates from p1 to p6 are two-dimensional. According to [ 39 ], in the case of open eyes, the EAR value remains approximately constant. However, if the eyes are closed, the distances between coordinates p3 and p5 and between p2 and p6 vanish; thus, the EAR value drops toward zero, as illustrated in Figure 4 .

Figure 4. EAR change over time [ 38 ].

In order to extract the EAR feature, Equation ( 1 ) was utilized. To compute the EAR value, the numerator sums the distances between the vertical landmark pairs, while the denominator takes the distance between the horizontal landmarks and multiplies it by two to balance it with the numerator [ 39 ]. Using Equation ( 1 ), the EAR values were calculated for each frame and stored in a list.
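For reference, the standard EAR definition, consistent with the description above (vertical landmark-pair distances in the numerator, twice the horizontal distance in the denominator, with landmarks p1–p6 as in Figure 4), is:

$$\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert} \tag{1}$$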

3.4.2. MAR Metric

Similar to the EAR, the mouth aspect ratio, or MAR, is used to calculate the openness degree of the mouth. In the Dlib facial landmark map, the mouth is characterized by 20 coordinates (49 to 68), as shown in Figure 3 a. However, we used points 61 to 68, as displayed in Figure 5 , to obtain the mouth openness degree. Using these coordinates, the distance between the top lip and the bottom lip is calculated using Equation ( 2 ) to determine whether the mouth is open [ 40 ]. In ( 2 ), the numerator calculates the distances between the vertical coordinates, and the denominator calculates the distance between the horizontal coordinates. As in ( 1 ), the denominator is multiplied by two to balance it with the numerator. As shown in Figure 6 , an increasing MAR value indicates that the mouth is open.
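For reference, one plausible form of the MAR consistent with the description above, using the eight inner-lip landmarks of Figure 5 numbered q1–q8 from the left mouth corner (the exact landmark pairing used in [ 40 ] may differ), is:

$$\mathrm{MAR} = \frac{\lVert q_2 - q_8 \rVert + \lVert q_3 - q_7 \rVert + \lVert q_4 - q_6 \rVert}{2\,\lVert q_1 - q_5 \rVert} \tag{2}$$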

Figure 5. The eight coordinates used to calculate the MAR [ 40 ].

Figure 6. MAR change over time [ 40 ].

3.4.3. Drowsy Head Pose

In this work, head pose estimation was achieved by finding the rotation angle of the head. The rotation angle can be defined as the amount of rotation of an object around a fixed point referred to as the point of rotation. To find the rotation angle of the head, first, the center nose landmark was acquired using MediaPipe face mesh for use as a reference and as the point of rotation for the head position in the frame, as mentioned earlier in preprocessing. Then, the nose's X and Y landmarks were converted to pixel coordinates by multiplying them by the frame width and height, respectively. Following that, by taking the initial nose 3D coordinates as the point of rotation, the rotation angles about the X and Y axes are calculated and used to estimate whether the head position is up, down, left, or right based on a set of thresholds. We estimated the angle thresholds as follows (a minimal sketch of this rule is given after the list):

  • Head pose up, if the X angle > 7°
  • Head pose down, if the X angle < −7°
  • Head pose right, if the Y angle > 7°
  • Head pose left, if the Y angle < −7°
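A minimal sketch of this angle-based rule, assuming the X and Y rotation angles (in degrees) have already been computed relative to the initial nose position; the function name and the sign conventions are assumptions:

```python
ANGLE_THRESHOLD = 7.0   # degrees, as estimated in the list above

def classify_head_pose(x_angle, y_angle):
    """Return the coarse head pose from the rotation angles (in degrees).

    x_angle: rotation about the horizontal axis (assumed positive = up).
    y_angle: rotation about the vertical axis (assumed positive = right).
    """
    if x_angle > ANGLE_THRESHOLD:
        return "up"
    if x_angle < -ANGLE_THRESHOLD:
        return "down"
    if y_angle > ANGLE_THRESHOLD:
        return "right"
    if y_angle < -ANGLE_THRESHOLD:
        return "left"
    return "forward"
```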

3.5. Data Labeling

According to [ 39 , 41 ], blinking is a quick movement of closing and reopening the eyes, which takes approximately 100 to 400 ms, while yawning is an act of opening and then closing the mouth that lasts around 4 to 6 s. As for a drowsy head pose, it can be described as random head tilting due to severe drowsiness, usually associated with eye closure, and it may last for a few seconds. Blinking, yawning, and head pose patterns differ depending on the person, action duration, degree of opening or closure, degree of head tilting, and speed. Moreover, a single reading of the EAR, MAR, and X and Y nose coordinates per frame is not enough to capture a blinking, yawning, or drowsy head pose event. Thus, in order to detect the different drowsy action patterns, we used four fifteen-frame-long vectors, one for each of the four readings, as the input to the classifiers.

It is well known that when a person starts feeling sleepy, the eye-closing time becomes longer. As a result, in this work we label a blink of 400 ms or longer as indicative of a drowsy driver. Given that the videos were taken at a frame rate of 30 frames/s, i.e., the frame time is 1/30 s, a drowsy blink will span at least 13 frames. Taking into consideration that people vary statistically in their eye closure time when they start feeling sleepy, we relax the 400 ms to 500 ms, which spans 15 frames, as was the case in [ 42 ].
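The 15-frame window length follows directly from the frame rate:

$$0.5\ \text{s} \times 30\ \tfrac{\text{frames}}{\text{s}} = 15\ \text{frames}.$$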

In order to verify our assumption, we tested different temporal window sizes during the labeling phase, including 9, 13, 15, 17, and 21 frames (see Table 2 ). By doing that, we aimed to experimentally determine the number of frames that best capture the different events of eye closure, yawning, and drowsy head pose. Our tests were conducted on three randomly selected, labeled subjects from our training dataset. As shown in Table 2 , smaller windows resulted in detecting more drowsy cases because short eye blinks (less than 400 ms) were labeled as drowsy when they are, in fact, not. On the other hand, long windows resulted in some real drowsy cases being missed. The results reported in the table supported our initial decision to use a 15-frame-long temporal window, as it best matched the video drowsiness labels. Consequently, a window of 15 frames in length was adopted.

Table 2. Varying the temporal window size while labeling the training dataset.

This temporal window was used to prepare the input data as follows: for every 30 frames/s video, the MAR value of the Nth frame is calculated and stored in a list, along with the MAR values from frames N − 7 to N + 7. These 15 MAR values are then concatenated, forming a 15-dimensional feature vector for that Nth frame. In this case, we take the 7 neighboring frames (from each side) of each Nth frame in order to capture the actual state of the mouth at that frame, either closed or open. The same method was applied to prepare the EAR and the x and y nose coordinate input vectors, resulting in a final input of four 15-dimensional vectors (a sketch of this windowing is given below).
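A minimal sketch of this centered windowing, assuming the per-frame EAR, MAR, and nose-coordinate lists have already been extracted; all names are illustrative:

```python
import numpy as np

HALF = 7   # neighbors taken from each side of frame N (15-frame window in total)

def centered_window(values, n):
    """Return the 15 values centered on frame n (frames n-7 .. n+7)."""
    return np.asarray(values[n - HALF:n + HALF + 1])

def build_inputs(ear_values, mar_values, nose_x, nose_y):
    """Build the four 15-dimensional input vectors for every usable frame.

    The per-frame value lists are assumed to have been extracted already;
    frames too close to the start or end of the video are skipped.
    """
    samples = []
    for n in range(HALF, len(ear_values) - HALF):
        samples.append({
            "ear": centered_window(ear_values, n),
            "mar": centered_window(mar_values, n),
            "nose_x": centered_window(nose_x, n),
            "nose_y": centered_window(nose_y, n),
        })
    return samples
```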

Labeling the training input data was a two-step process: first, the eye, mouth, and head states were labeled separately; then, a final label of the driver's state was given. As for the eyes, an EAR threshold of 0.2 was set to reflect whether the driver's eyes were open. For the mouth, the MAR was given a 0.5 threshold to indicate whether the mouth was wide open. In terms of the head, the nose coordinates were given a set of angle thresholds to reflect the different poses a drowsy driver's head may take, as explained previously. After labeling the state of these three parts, a final label of either 0 (alert) or 1 (drowsy) was given. Label 1 was assigned if any of these states was met, i.e., if a closed eye, an open mouth, or a drowsy head pose was present (see the sketch after this paragraph).
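A minimal sketch of the per-frame labeling rule, using the thresholds stated above; the head-pose flag is assumed to come from the angle thresholds of Section 3.4.3:

```python
EAR_THRESHOLD = 0.2   # below this, the eyes are treated as closed
MAR_THRESHOLD = 0.5   # above this, the mouth is treated as wide open

def label_frame(ear, mar, drowsy_head_pose):
    """Return 1 (drowsy) or 0 (alert) for a single frame.

    drowsy_head_pose: True if the head-pose angles exceeded the thresholds of
    Section 3.4.3 (i.e., the pose is up, down, left, or right, not forward).
    """
    eyes_closed = ear < EAR_THRESHOLD
    mouth_open = mar > MAR_THRESHOLD
    return 1 if (eyes_closed or mouth_open or drowsy_head_pose) else 0
```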

When choosing the thresholds, we studied the maximum EAR (MAX EAR) and maximum MAR (MAX MAR) of different eyes and mouth shapes and sizes in the 18 subjects from our training dataset, as shown in Table 3 . MAX EAR reflects the EAR value at the regular openness state of the eyes, and MAX MAR reflects the maximum MAR value that takes place when yawning. We found out that most of the subjects have a MAX EAR range between 0.3 and 0.37. However, we still need to consider the cases of subjects with small eyes, whose MAX EAR value reached a minimum of 0.23. Thus, we experimented with different thresholds during the labeling stage, as illustrated in Table 4 . According to Table 4 , at a threshold value greater than 0.4, all data frames of all the subjects were labeled “Closed eyes” regardless of the eye state, as none of the subjects in the training dataset has a MAX EAR greater than 0.37. Threshold values between 0.35 and 0.25 had a similar issue as they did not work with subjects of MAX EAR value of 0.34 and below. At the threshold value of 0.2, all the subjects got labels of “Open eyes” or “Closed eyes” successfully without any bias. Lastly, threshold values that were less than 0.2 worked as well, but they reduced the “Closed eyes” labels in the training dataset. Thus, taking into consideration both subjects with small eyes and having a balanced training dataset, we decided to set an EAR threshold value of 0.2 to identify the drowsy eyes from the alert.

Table 3. MAX EAR and MAX MAR values for the training set subjects.

Table 4. EAR threshold experimental testing while labeling the training dataset. (* The chosen EAR threshold value is in bold.)

Similarly, for the MAX MAR values, we noticed in Table 3 that the majority of the drivers reach a MAX MAR value of 0.9 when yawning. However, drivers with small mouths can reach a MAX value of 0.6 or 0.7 depending on the size of the mouth and the way of yawning. Thus, we applied some experiments during the labeling stage to choose the best MAR threshold, as shown in Table 5 . According to Table 5 , at a threshold value greater than 0.9, all data frames of all the subjects were labeled “Closed mouth” regardless of the mouth state, as the MAX MAR value for the subjects in the training set is 0.9. For threshold values between 0.8 and 0.6, we noticed a similar issue as the frames of subjects with MAX MAR of 0.79 or below were always labeled as “Closed mouth.” At the threshold value of 0.5, we have successfully labeled all subjects with a label “Open mouth” or “Closed mouth,” reflecting the true state of the mouth. Any threshold value below 0.5 caused some frames to be mislabeled in cases such as talking or laughing. Therefore, we decided to set the MAR threshold to a minimum value of 0.5 to address any unique cases.

Table 5. MAR threshold experimental testing while labeling the training dataset. (* The chosen MAR threshold value is in bold.)

3.6. Classification

After labeling the extracted values, two main machine learning data preprocessing steps were performed. First, data balancing is an essential step when dealing with unbalanced instances between the two classes. In our case, there were 300,266 non-drowsy cases (label 0) and 72,658 drowsy cases (label 1). Using the under-sampling and over-sampling utilities from the imbalanced-learn library, we over-sampled the minority class (label 1) and under-sampled the majority class (label 0).

The second preprocessing step is data splitting, where a data splitting function from the scikit-learn library was utilized. The data were split into 70% training and 30% testing. The training data were used to train the models, while the testing data were used to evaluate their performance. After splitting the dataset, three classification models were applied: RF, sequential NN [ 43 ], and SVM. Then, the hyperparameters of the three models were tuned and optimized using grid search [ 28 ].
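A minimal sketch of the balancing and splitting steps with imbalanced-learn and scikit-learn; the placeholder data and the specific sampling ratios are assumptions, since the paper does not report them:

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real feature matrix (four 15-frame
# vectors flattened to 60 values per sample) and the 0/1 drowsiness labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))
y = (rng.random(1000) < 0.2).astype(int)   # imbalanced, like the real labels

# Over-sample the drowsy class, then under-sample the alert class.
# The ratios (0.5 and 1.0) are illustrative choices only.
X_bal, y_bal = RandomOverSampler(sampling_strategy=0.5, random_state=0).fit_resample(X, y)
X_bal, y_bal = RandomUnderSampler(sampling_strategy=1.0, random_state=0).fit_resample(X_bal, y_bal)

# 70% of the balanced data for training, 30% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.3, random_state=0, stratify=y_bal)
```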

Random forest (RF) is a popular and effective machine learning algorithm, created by Breiman [ 44 ]. It involves constructing a group of decision trees that work together to make predictions. The trees are created using bootstrap samples and randomly selecting variables at each node. The RF model combines the predictions of each tree to determine the final prediction. In this study, the scikit-learn library’s RF classifier was used with “entropy” as the criterion parameter and 50 trees in the forest.

The sequential neural network (NN) model, also known as the feedforward neural network, is the basic type of neural network model [ 43 ]. In this study, we used the Keras library to build our neural network model. Keras offers an easy way to build models using the sequential approach, where each layer is added one at a time with weights corresponding to the following layer. In this work, a neural network with six layers was created, consisting of an input layer, four hidden layers with five nodes each using ReLU activation, and an output layer with one node using sigmoid activation. The model classifies the output as either 1 for drowsy or 0 for nondrowsy.

Support vector machine (SVM) [ 45 ] is a supervised machine learning model that classifies two groups of data by finding a hyperplane in N dimensions. The goal is to select the hyperplane that maximizes the margin between data points, which improves future classification accuracy. The SVM model is popular because it has low computational complexity and high accuracy. The support vector classification (SVC) from the scikit-learn library was used with a linear kernel, a regularization parameter of C = 1, probability estimates enabled, and the random state parameter was set to 0 to control data shuffling.
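A minimal sketch instantiating the three classifiers with the hyperparameters named above (a 50-tree entropy RF, a linear SVC with C = 1 and probability estimates, and a sequential NN with four 5-node ReLU hidden layers and a sigmoid output); the 60-dimensional input shape and the optimizer/loss choices are assumptions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras import layers

# Random forest: 50 trees with the entropy split criterion, as reported above.
rf_model = RandomForestClassifier(n_estimators=50, criterion="entropy", random_state=0)

# Linear SVM: C = 1, probability estimates enabled, fixed random state.
svm_model = SVC(kernel="linear", C=1.0, probability=True, random_state=0)

# Sequential NN: four hidden layers of five ReLU nodes and one sigmoid output.
# The 60-value input (4 features x 15 frames) is an assumed flattened layout.
nn_model = keras.Sequential([
    layers.Input(shape=(60,)),
    layers.Dense(5, activation="relu"),
    layers.Dense(5, activation="relu"),
    layers.Dense(5, activation="relu"),
    layers.Dense(5, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
nn_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Example usage (with the split from the previous sketch):
# rf_model.fit(X_train, y_train); svm_model.fit(X_train, y_train)
# nn_model.fit(X_train, y_train, epochs=20, batch_size=64)
```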

4. Results and Discussion

This section lists the specifications of our development environment. In addition, it presents and discusses the results of the trained models using the testing data that was extracted from the NTHUDDD dataset. By finding the confusion matrix, accuracy, sensitivity, specificity, macro precision, and macro F1-score, and through two visual plots of the results, the best model for drowsiness detection was determined. This section also compares the results of the proposed system with other DDD systems.

While implementing this system, we used a laptop equipped with an i7 processor, 16 GB RAM, and an integrated GPU (Intel(R) UHD Graphics 620). As for the development environment, we used Jupyter Notebook in Anaconda and developed the system using Python 3.7. We mainly used scikit-learn 1.1, TensorFlow 2.12, Keras 2.12, Dlib 19.24.1, OpenCV 4.7.0, and MediaPipe 0.9.3.0 libraries and packages.

The implementation was performed in two steps, namely, the training step and the testing step. In the training step, the model was trained offline on the precollected NTHUDDD standard dataset. In the testing step, the video footage of the driver's face was taken at 30 frames/s by a webcam fixed at the center of the car's dashboard. The webcam fed the video frames to a laptop that was preloaded with the trained DDD model. The trained DDD model extracted the feature vector corresponding to each frame and classified it within 2–4 ms, which is negligible compared to the 33 ms time span between one frame and the next, thus making the decision mainly dependent on the frame time (33 ms) and meaning the system can be considered a real-time decision system.

Table 6 illustrates the results of the trained models. The results show that the best performance is achieved by the RF model. When analyzing the results, it is evident that the RF model gave an almost perfect performance, achieving 99% in accuracy, sensitivity, specificity, macro precision, and macro F1-score. The sequential NN model achieved the second-best results, with 96% accuracy, 97% sensitivity, and 96% specificity, macro precision, and macro F1-score. The SVM model achieved the lowest results, with 80% accuracy, 70% sensitivity, and 88% specificity.

Table 6. Results of the proposed DDD system.

Figure 7 and Figure 8 show the Receiver Operating Characteristic (ROC) and precision–recall curves. These curves were plotted as a means of visualizing the three models' performance. The ROC curve is commonly used for binary classification models to describe their performance by showing whether the model predicts the positive class when the outcome is actually positive [ 46 ].

Figure 7. ROC curve for the testing data.

Figure 8. The precision–recall curve for the testing data.

A score called the Area Under the Curve (AUC) can be calculated to reflect the total area under the ROC curve and the degree of class separability. A model with a high AUC value is better at predicting the actual outcomes of the true negative and true positive classes. The ROC curve for the testing data is presented in Figure 7 . Similarly, the precision–recall curve is a suitable evaluation tool for binary classification models [ 46 ]; here, too, a higher AUC score indicates better predictive performance. Figure 8 shows the precision–recall curve for the testing data. As can be seen in Figure 7 , both the RF and sequential NN models achieved a high AUC score, whereas the AUC score of the SVM model was noticeably lower, reaching 0.867. When comparing the three curves, the best performance was achieved by the RF model, with a 0.999 AUC score. Likewise, in Figure 8 , the RF model gave an AUC score of 0.999, reflecting the best performance compared with the sequential NN and SVM models, which gave scores of 0.991 and 0.855, respectively.
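A minimal sketch of how such ROC and precision–recall AUC scores can be computed with scikit-learn from a fitted model's positive-class probabilities; the function is illustrative, not the authors' exact evaluation code:

```python
from sklearn.metrics import auc, precision_recall_curve, roc_curve

def curve_scores(y_true, scores):
    """Return ROC and precision-recall curve points with their AUC values.

    scores: positive-class probabilities for the test set, e.g.
    rf_model.predict_proba(X_test)[:, 1]. The curve points can be plotted
    to reproduce figures like Figure 7 and Figure 8.
    """
    fpr, tpr, _ = roc_curve(y_true, scores)
    precision, recall, _ = precision_recall_curve(y_true, scores)
    return {
        "roc": (fpr, tpr, auc(fpr, tpr)),
        "precision_recall": (recall, precision, auc(recall, precision)),
    }
```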

The above discussion clearly shows that our proposed system can differentiate drowsy drivers from alert ones. It is easy to use and convenient for the drivers as it is non-invasive, non-intrusive, and does not require any sensors or equipment to be attached to the driver's body. It is also adaptable for use in different vehicles, including buses, trucks, cars, motorcycles, and construction vehicles. Table 7 presents the most recent literature on drowsiness-detection systems. Due to the different datasets and features used, a one-to-one comparison is not applicable. However, as illustrated, our RF model outperforms the other techniques available in the literature. Nevertheless, it is important to note that the system has some limitations. The HOG face detector can fail in some scenarios, such as having more than one subject in the frame, variation in lighting intensity while driving, and driving on a dark street.

Table 7. Comparison of the proposed method with similar techniques.

5. Conclusions

In conclusion, in this paper, we proposed a real-time image-based drowsiness-detection system. In order to implement drowsiness detection, a webcam was used to detect the driver in real time and extract the drowsiness signs from the eyes, mouth, and head. Then three classifiers were applied at the final stage. When a drowsiness sign is detected, an alarm sounds, alerting the driver and ensuring road safety. Evaluation of system performance over the NTHUDDD dataset resulted in an accuracy of 99% for the RF classifier.

In the future, we plan to develop a mobile application to allow users to easily use the system while driving. Furthermore, to overcome the limitation of the HOG face detector, we intend to use a more advanced camera that can adapt to the changes in lighting intensity and automatically detect and focus on the driver’s face.

Abbreviations

The following abbreviations are used in this manuscript:

Funding Statement

This research received no external funding.

Author Contributions

Conceptualization, Y.A. and A.A.; methodology, Y.A., A.A. and M.T.; software, Y.A. and A.A.; validation, Y.A., A.A. and M.T.; data curation, Y.A. and A.A.; writing—original draft preparation, Y.A. and A.A.; writing—review and editing, Y.A., A.A. and M.T.; visualization, Y.A. and A.A. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, and Conflicts of Interest

The authors declare no conflict of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
