Debre Berhan University Institutional Repository


ABEY, BEKELE 2021-09-23T08:11:46Z 2021-09-23T08:11:46Z 2021-08-24
dc.description.abstract According to WHO reports in 2021, more than 5% of the world's population has a hearing disability. In Ethiopia, more than 10% of the population has hearing and speech difficulties. These hearing-impaired communities communicate using sign language. However, they cannot converse with hearing people in their day-to-day activities, since most hearing people do not understand sign language. Besides, only a few schools in Ethiopia support sign language, and the teaching-learning process is mainly conducted through spoken discussion and description of the teacher's ideas. As a result, deaf students do not have equal access to education with hearing students. To address this issue, the researcher proposes a system that automates sign language recognition using deep learning. Various studies have attempted to automate sign language recognition with different techniques, though their accuracy remains insufficient. Moreover, no prior study combines all three communication elements in one model: Amharic letter signs, numeral signs, and Amharic word signs (the most frequent words in school, selected by sign language teachers); earlier work has also emphasized only the right-hand gestures of the signer. Therefore, the researcher proposes automatic recognition of Ethiopian Sign Language (ESL) that combines the aforementioned contexts using deep learning. The proposed model comprises five major processes: preprocessing, hand and face segmentation, feature extraction, feature learning, and classification. In the preprocessing phase, the input video is converted to image frames, and each frame is resized to a standard size and denoised. In the hand and face segmentation phase, the region of interest (the hand and face parts of the image) is extracted using YCbCr skin-color detection. In the feature extraction phase, characteristic features are extracted using the Gabor filter.
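The segmentation and feature-extraction steps described above can be sketched in a few lines of NumPy. The thesis abstract does not give its exact parameters, so the Cb/Cr skin thresholds below (77–127 and 133–173, a commonly used rule) and the Gabor kernel settings are illustrative assumptions, not the author's values.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image to YCbCr (ITU-R BT.601 full-range)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-colored pixels (hand/face region of interest).

    Threshold ranges are assumed common defaults, not the thesis's values.
    """
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier.

    Convolving image patches with a bank of such kernels (varying theta,
    lambd) yields the oriented texture features the abstract refers to.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma ** 2) * (yr ** 2)) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd)
```

In practice one would apply `skin_mask` to each preprocessed frame, crop to the masked region, and filter it with a bank of `gabor_kernel` responses at several orientations before feeding the result to the network.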
For feature learning, a convolutional neural network is applied. In the classification phase, to classify the given sign into sixty predefined classes, a 68-way softmax layer is applied. The proposed system is implemented with Keras on Google Colab and tested on a sample image dataset collected from Atse ZereaYacob Primary School. The developed model achieved 97% training accuracy and 96.81% accuracy in recognizing sign languages. en_US
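The final classification step the abstract names (a 68-way softmax over the CNN's output scores) can be sketched as follows; the class count is taken from the abstract, while the logits themselves would come from the network's last dense layer (not shown here).

```python
import numpy as np

NUM_CLASSES = 68  # width of the softmax output layer, per the abstract

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

def classify(logits):
    """Map a vector of class scores to (predicted class index, probability)."""
    probs = softmax(logits)
    return int(np.argmax(probs)), float(np.max(probs))
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but prevents overflow for large scores, which is the standard way softmax is computed in frameworks such as Keras.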
dc.language.iso en en_US
dc.subject Sign Language Recognition, Deep Learning, CNN, Feature Extraction, Feature Learning. en_US
dc.type Thesis en_US
