Automatic Speech Recognition: Systematic Literature Review
Title: End-to-End Speech Recognition: A Survey
Abstract: In the last decade of automatic speech recognition (ASR) research, the introduction of deep learning brought considerable reductions in word error rate of more than 50% relative, compared to modeling without deep learning. In the wake of this transition, a number of all-neural ASR architectures were introduced. These so-called end-to-end (E2E) models provide highly integrated, completely neural ASR models, which rely strongly on general machine learning knowledge, learn more consistently from data, and depend less on ASR domain-specific experience. The success and enthusiastic adoption of deep learning, accompanied by more generic model architectures, has led to E2E models now becoming the prominent ASR approach. The goal of this survey is to provide a taxonomy of E2E ASR models and corresponding improvements, and to discuss their properties and their relation to the classical hidden Markov model (HMM) based ASR architecture. All relevant aspects of E2E ASR are covered in this work: modeling, training, decoding, and external language model integration, accompanied by discussions of performance and deployment opportunities, as well as an outlook into potential future developments.
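For reference, the word error rate figures cited above come from a Levenshtein alignment between the hypothesis and reference transcripts. A minimal illustrative sketch (not from the survey itself):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion over 6 words
```

A "50% relative" reduction means the new WER is half the old one (e.g. 20% down to 10%), not a 50-point absolute drop.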
Speech emotion recognition methods: A literature review
Babak Basharirad, Mohammadreza Moradhaseli; Speech emotion recognition methods: A literature review. AIP Conf. Proc. 3 October 2017; 1891 (1): 020105. https://doi.org/10.1063/1.5005438
Recently, research attention on emotional speech signals has grown in human-machine interfaces due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluating parameters (feature set, classification of features, and accuracy of usage). In addition, the paper evaluates the performance and limitations of available methods. Furthermore, it highlights current promising directions for improving speech emotion recognition systems.
LITERATURE SURVEY – SPEECH RECOGNITION AND PREPROCESSING
Automatic Speech Recognition (ASR) is increasingly important in information and communication technology. ASR can be used to fill a form: first, the spoken input is converted into text (speech-to-text, STT). Audio noise can be removed during MFCC-based preprocessing. The audio is converted into the corresponding English characters using a Dynamic Bayesian Network (DBN) and then visualized in the form format. The system recognizes all accents using a learning method, with results stored in a knowledge base. This would be helpful for people who are uneducated or blind: their audio is captured from the environment, the noise is removed, the speech is coded as characters, and at last the form is generated in the required format.
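For context, the MFCC features mentioned above are conventionally computed by framing the signal, taking a mel-scaled filterbank over each frame's power spectrum, and decorrelating with a DCT. The following NumPy sketch is purely illustrative; the parameter values (16 kHz sample rate, 25 ms frames, 10 ms hop, 26 filters, 13 cepstra) are common defaults, not the surveyed paper's implementation:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=400, hop=160):
    """Simplified MFCC extraction: pre-emphasis -> framing -> power
    spectrum -> mel filterbank -> log -> DCT-II. Illustrative only."""
    # Pre-emphasis boosts high frequencies
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters equally spaced on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the filterbank energies; keep the first n_ceps
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_energy @ dct.T

feats = mfcc(np.random.randn(16000))  # one second of audio at 16 kHz
print(feats.shape)  # (98, 13): 98 frames, 13 cepstral coefficients each
```

Production systems would typically use an established front-end (e.g. HTK or librosa) rather than hand-rolled code, but the pipeline stages are the same.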
Related Papers
This paper presents an approach for a Hindi fruit-name recognizer system. Every person's speech is unique, so in this approach the database speech samples are collected from 20 different speakers with two iterations each. These recordings are used to train an acoustic model on a 20-speaker database with a vocabulary of 45 words. The HTK toolkit is used to train on the input data and evaluate the results. The proposed system gives a recognition rate of 94.28% at the sentence level and 98.09% at the word level. KEYWORDS: HMM (Hidden Markov Model), ASR (Automatic Speech Recognition), Speech Recognition (SR), MFCC (Mel Frequency Cepstral Coefficient)
This paper presents a study of an automatic speech recognition system for Hindi utterances with regional Indian accents. In paper [3] we designed a MATLAB-based ASR and control system for eight English keywords using a simple rule base; this rule-based algorithm was the starting point for keyword recognition. In paper [4] we designed a Hindi keyword recognition system for home automation using MFCC and DTW: features of the speech signal are extracted as MFCC coefficients, and Dynamic Time Warping (DTW) is used as the feature-matching technique. The recognition results are tested on clean and noisy test data; average accuracy is 97.50% for clean data and 91.25% for noisy data. Since detecting the correct utterance in a noisy environment remains a problem, we review different papers and identify techniques to design our ASR control system for Hindi keywords using MFCC and DTW in noisy environments.
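The MFCC-plus-DTW matching described above aligns a test utterance's feature sequence against stored templates and picks the closest one. A minimal illustrative DTW implementation (not the authors' code) over per-frame feature vectors:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences
    (e.g. per-frame MFCC vectors). a: (n, d), b: (m, d).
    Allows non-linear time alignment between utterances of
    different lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # Each step may advance one sequence, the other, or both
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([[0.], [1.], [2.]])
b = np.array([[0.], [0.], [1.], [2.]])
print(dtw_distance(a, b))  # 0.0: b is a time-stretched copy of a
```

In a template-matching recognizer, the predicted keyword is simply the template with the smallest DTW distance to the input.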
In many applications, speech recognition must operate where there is some distance between the speakers and the microphones; this is called distant speech recognition (DSR). In this condition, speech recognition must deal with reverberation. Nowadays, deep learning technologies are becoming the main technologies for speech recognition: a Deep Neural Network (DNN) in a hybrid with a Hidden Markov Model (HMM) is the commonly used architecture. However, this system is still not robust against reverberation. Previous studies use Convolutional Neural Networks (CNNs), a variation of neural networks, to improve the robustness of speech recognition against noise. CNNs have the property of pooling, which is used to find local correlation between neighboring dimensions in the features; with this property, a CNN can be used for feature learning that emphasizes the information in neighboring frames. In this study we use a CNN to deal with reverberation. We also propose to use feature t...
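The pooling property mentioned above, which summarizes neighboring frames or feature dimensions, can be illustrated with a minimal max-pooling sketch (the frames-by-dimensions layout is an assumption for illustration, not the paper's architecture):

```python
import numpy as np

def max_pool_1d(features, pool=2):
    """Max-pooling across neighboring feature frames: each output row
    keeps the per-dimension maximum over a window of `pool` frames,
    so small local time shifts do not change the pooled output.
    features: (n_frames, n_dims)."""
    n_frames, n_dims = features.shape
    trimmed = features[: n_frames - n_frames % pool]  # drop leftover frames
    return trimmed.reshape(-1, pool, n_dims).max(axis=1)

x = np.array([[1., 0.],
              [3., 2.],
              [5., 4.],
              [2., 6.]])
print(max_pool_1d(x))  # [[3. 2.] [5. 6.]]
```

This shift invariance over small neighborhoods is one intuition for why CNN front-ends help under noise and reverberation.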
Automatic speech processing systems are employed more and more often in real environments. Although the underlying speech technology is mostly language-independent, differences between languages with respect to their structure and grammar have a substantial effect on recognition system performance. In this paper, we present a review of the latest developments in sign language recognition research in general and in Arabic sign language (ArSL) in particular. This paper also presents a general framework, called SignsWorld, for improving communication between the deaf community and hearing people. The overall goal of the SignsWorld project is to develop a vision-based technology for recognizing and translating continuous Arabic sign language (ArSL).
The Bangla language is the seventh most spoken language, with 265 million native and non-native speakers worldwide. However, English is the predominant language for online resources, technical knowledge, journals, and documentation. Consequently, many Bangla-speaking people with limited command of English face hurdles in utilizing English resources. To bridge the gap between limited support and increasing demand, researchers have conducted many experiments and developed valuable tools and techniques to create and process Bangla language materials. Many efforts are also ongoing to make the Bangla language easy to use in online and technical domains. Several review papers cover past and future Bangla Natural Language Processing (BNLP) trends, mainly concentrating on specific domains of BNLP such as sentiment analysis, speech recognition, optical character recognition, and text summarization. There is an apparent scarcity of re...
In previous work we have shown that an ASR system consisting of a dual-input DBN, which simultaneously observes MFCC acoustic features and predicted phone labels generated by an exemplar-based Sparse Classification (SC) system, can achieve better word recognition accuracies in noise than a system observing only one of those input streams. This paper explores two modifications of the SC input to further improve the noise robustness of the dual-input DBN system: 1) integrating more time context and 2) using the N best states. Experiments on AURORA-2 reveal that the first approach significantly improves the recognition results at almost all SNRs, particularly in the noisier conditions, achieving up to 6.1% (absolute) accuracy gain at SNR −5 dB. The second modification shows that there is an optimal N which allows the maximum attainable accuracy to be improved by another 11.8% at −5 dB.
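The SNR conditions quoted above (e.g. −5 dB on AURORA-2) follow the usual definition SNR = 10·log10(P_signal / P_noise). A small sketch of mixing noise into speech at a target SNR, as corpora like AURORA-2 are constructed; `mix_at_snr` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture speech + noise has the requested
    signal-to-noise ratio in dB, then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power required for the target SNR
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, snr_db=-5)
# Verify the achieved SNR from the residual noise
achieved = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech)**2))
print(round(achieved, 6))  # -5.0
```

At −5 dB the noise carries roughly three times the power of the speech, which is why absolute accuracy gains there are so significant.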
ICDSMLA 2019, pp 745–752
A Study on Sign Language Recognition-A Literature Survey
- P. V. Naresh,
- R. Visalakshi &
- B. Satyanarayana
- Conference paper
- First Online: 19 May 2020
Part of the book series: Lecture Notes in Electrical Engineering ((LNEE,volume 601))
This survey primarily focuses on recent developments in technologies that help people with hearing loss and speech loss converse easily and frequently with hearing people. The work done so far spans various technologies and techniques, such as smart gloves, Android applications, convolutional neural networks, Gaussian filtering, HMMs, speech to text, and video to text to speech. The system was tested with an SSVM classifier on 10 sample gestures, and the average recognition rate was 95.4%. Based on the accuracy obtained in the test results, future work will continue with the SSVM classifier.
- Sign language
- Hand gesture
- Speech recognition
- Deaf and dumb
Cite this paper.
Naresh, P.V., Visalakshi, R., Satyanarayana, B. (2020). A Study on Sign Language Recognition-A Literature Survey. In: Kumar, A., Paprzycki, M., Gunjan, V. (eds) ICDSMLA 2019. Lecture Notes in Electrical Engineering, vol 601. Springer, Singapore. https://doi.org/10.1007/978-981-15-1420-3_80
Print ISBN : 978-981-15-1419-7
Online ISBN : 978-981-15-1420-3
A huge amount of research has been done in the field of speech signal processing in recent years. In particular, there has been increasing interest in the automatic speech recognition (ASR) technology field. ASR began with simple systems that responded to a limited number of sounds and has evolved into sophisticated systems that respond fluently to natural language. This systematic review of ...
The literature on noisy speech emotion recognition has identified three main approaches to processing noisy speech emotion data for classification or regression analysis. The first is speech enhancement pre-processing to improve or remove noise from raw speech signals or calculated features.
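The speech-enhancement pre-processing mentioned above is classically done by spectral subtraction: subtract an estimate of the noise magnitude spectrum from each frame and resynthesize with the noisy phase. A single-frame sketch (illustrative only; the `floor` parameter is an assumed safeguard against negative magnitudes, not a value from the literature cited here):

```python
import numpy as np

def enhance_frame(noisy_frame, noise_mag, floor=0.01):
    """Single-frame spectral subtraction: subtract the estimated noise
    magnitude from the noisy magnitude spectrum, apply a spectral floor,
    and reconstruct the waveform using the noisy phase."""
    spec = np.fft.rfft(noisy_frame)
    mag = np.abs(spec) - noise_mag               # subtract noise estimate
    mag = np.maximum(mag, floor * np.abs(spec))  # floor avoids negative magnitudes
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), len(noisy_frame))

# With a zero noise estimate the frame passes through unchanged:
frame = np.sin(2 * np.pi * 440 * np.arange(256) / 16000)
clean = enhance_frame(frame, noise_mag=0.0)
print(np.allclose(clean, frame))  # True
```

In practice the noise magnitude is estimated from speech-free segments, and over-subtraction introduces the "musical noise" artifacts that motivated the feature-domain and model-based alternatives the review describes.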
ASR can be defined as the process of deriving the transcription of speech, known as a word sequence, in which the focus is on the shape of the speech wave [1]. In actuality, speech recognition ...
1.2 Our contribution. In this section, we discuss how our research work differs from earlier SR surveys. Most recent surveys have targeted specific areas in speech recognition, such as ASR surveys, acoustic-phonetic analysis, Mandarin speech recognition, speech emotion recognition [44, 45], speech denoising methods, ML and DL applications, state-of-the-art ...
Abstract. Nowadays emotion recognition from speech (SER) is a demanding research area for researchers because of its wide real-life applications. There are many challenges for SER systems such as the availability of suitable emotional databases, identification of the relevant feature vector, and suitable classifiers.
Recently great strides have been made in the field of automatic speech recognition (ASR) by using various deep learning techniques. In this study, we present a thorough comparison between cutting-edged techniques currently being used in this area, with a special focus on the various deep learning methods. This study explores different feature extraction methods, state-of-the-art classification ...
The speech-emotion recognition (SER) field has become crucial in advanced human-computer interaction (HCI). ... It shows that most previous survey papers are literature reviews, and only a few attempt a systematic literature review for SER, though all covered the main parts of SER. However, there are some limitations in covering the preprocess ...
Title: End-to-End Speech Recognition: A Survey. Authors: Rohit Prabhavalkar, Takaaki Hori, Tara N. Sainath, Ralf Schlüter, Shinji Watanabe.
techniques within the domain of Speech Recognition through a systematic literature review of the existing work. We will introduce the most significant and relevant techniques that may provide some directions in the future research. Key words: Automatic Speech Recognition (ASR), End-to-End Systems, Hybrid Systems, Low Resource Language.
The purpose of this report was to review the existing literature on speech-to-text as an accommodation within classroom and testing settings. Due, in part, to the speed of technology changes related to this type of accommodation, literature reviewed was limited to studies published within the past 10 years.
Automatic speech recognition: a survey. March 2021; Multimedia Tools and Applications 80(3):1–47. [Review-methodology figure: Filter the Acquired Literature → Read and Classify the Articles → Amalgamate the ...]
This paper analyses the accuracy of feature extraction based on modeling implemented using MFCC and HMM for two different types of speech, connected and continuous. The recognition results show that the overall system accuracy is 69.22% for connected words and 50% for continuous words.
Among the other modes of communication, such as text, body language, facial expressions, and so on, human beings employ speech as the most common. It contains a great deal of information, including the speaker's feelings. Detecting the speaker's emotions from his or her speech has shown to be quite useful in a variety of real-world applications. The dataset development, feature extraction ...
This paper critically analysed the literature on SER in terms of speech databases ...
Literature Survey: Sign Language Recognition Using Gesture Recognition and Natural Language Processing ... The POS tagging stage creates parts-of-speech tags: noun (N), preposition (Prep), pronoun, adjective (Adj), verb (V), adverb (Adv), etc. using WordNet Algorithm for POS tagging. Further grammar designing is done by defining rules for ...
This survey reveals that the shortage of freely available continuous speech corpora deserves more research attention in this domain, and shows a need to compile large corpora or a benchmark, as it will be a key factor to promote the Arabic language research for effective human-computer interaction. Speech recognition poses some interesting challenges such as varying acoustic conditions ...
Speech emotion recognition could be considered a new topic in speech processing, one that plays an essential role in human interaction. Recognizing emotion is a kind of speech processing that involves three significant aspects of designing a speech emotion recognition system. This article reviews the work on speech emotion recognition and is helpful for further research. Firstly, speech emotion ...
This literature survey concludes that many techniques have been discovered to advance speech recognition for hearing-impaired (deaf), speechless (mute), and common people. The common techniques found include image acquisition, skin segmentation, background subtraction, applying Canny or Sobel edge detection, feature extraction, and ...