Static Sinhala sign language recognition using MobileNet convolutional neural network

dc.contributor.author Jagodage, J.P.T.
dc.contributor.author Umesh, E.R.
dc.date.accessioned 2022-08-17T06:56:42Z
dc.date.available 2022-08-17T06:56:42Z
dc.date.issued 2021-09-15
dc.identifier.uri http://drr.vau.ac.lk/handle/123456789/312
dc.description.abstract Deaf and mute people cannot exchange their thoughts and ideas using spoken words, and they face many difficulties in their day-to-day lives because of this. They use sign language to communicate with others; however, there is no single standard sign language, since it differs from place to place. Therefore, simple technology is necessary to help deaf or mute people communicate with hearing people without any barrier. In recent years, researchers have been interested in recognizing sign language and converting it into text or voice using vision-based and sensor-based approaches. However, a significant performance gap is still observed in these approaches due to the movement of hands at different speeds, the demands of real-time recognition, and changes in background and illumination. In particular, no accurate research has been completed for Sinhala sign language recognition. This research aims to recognize static Sinhala sign language gestures and translate them into natural language using deep learning algorithms. An efficient convolutional neural network (CNN) architecture is presented to obtain high-quality features from 2D static hand gesture images in order to classify 12 different gestures. MobileNets, a class of lightweight deep convolutional neural networks, was used to classify the hand gesture images, and the development was implemented with the Keras deep learning library on a TensorFlow backend. The proposed method efficiently discriminates static hand gesture images and offers an accurate sign language detection classifier. The proposed MobileNet CNN architecture achieves 96% prediction accuracy on 240 images of the 12 static gestures, which means that our prediction model is more robust and accurate than previous approaches. This work explores the benefits of deep learning by recognizing a subset of Sinhala sign language signs, bridging the communication gap between deaf or mute people and others. en_US
dc.language.iso en en_US
dc.publisher Faculty of Applied Science en_US
dc.subject Convolutional neural network en_US
dc.subject Hand gestures en_US
dc.subject MobileNet en_US
dc.subject Sinhala sign language en_US
dc.title Static Sinhala sign language recognition using MobileNet convolutional neural network en_US
dc.type Conference paper en_US
dc.identifier.proceedings FARS 2021 en_US
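
The abstract above describes a MobileNet classifier for 12 static hand gesture classes, built with the Keras deep learning library on a TensorFlow backend. The following is a minimal illustrative sketch of such a pipeline, not the authors' released code: the dataset path, batch size, epoch count, and the transfer-learning setup (ImageNet weights with a frozen base) are assumptions added for illustration.

# Minimal sketch of a MobileNet-based classifier for 12 static Sinhala
# sign language gestures, using Keras on a TensorFlow backend as stated
# in the abstract. Paths and hyperparameters below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

NUM_CLASSES = 12          # 12 static gestures (from the abstract)
IMG_SIZE = (224, 224)     # MobileNet's standard input resolution

# Hypothetical dataset layout: one sub-folder of gesture images per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sinhala_signs/train", image_size=IMG_SIZE, batch_size=32)

# MobileNet pretrained on ImageNet, used here as a frozen feature
# extractor (an assumed transfer-learning setup, not confirmed by the text).
base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # 12-way classifier
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

Freezing the pretrained base and training only the small classification head is a common choice when, as here, the labeled dataset is small (240 evaluation images across 12 classes).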

