dc.description.abstract |
Deaf and mute people cannot exchange their thoughts and ideas using spoken words, which creates many difficulties in their day-to-day lives. They communicate with others through sign language, but there is no single standard sign language for mute people, since it differs from place to place. A simple technology is therefore needed to help deaf or mute people communicate with ordinary people without any barrier. In recent years, researchers have been interested in recognizing sign language and converting it into text or voice using vision-based and sensor-based approaches. However, a significant performance gap remains in these approaches because of hand movements at different speeds, the demands of real-time recognition, and changes in background and illumination. In particular, there is no accurate, complete research on Sinhala sign language recognition. This research aims to recognize static Sinhala sign language gestures and translate them into natural language using deep learning algorithms. An efficient convolutional neural network (CNN) architecture is presented to obtain high-quality features from 2D static hand gesture images and classify 12 different gestures. MobileNets, a class of lightweight deep convolutional neural networks, was used to classify the hand gesture images, and the system was implemented with the Keras deep learning library on a TensorFlow backend. The proposed method efficiently discriminates between static hand gesture images and offers an accurate sign language detection classifier. The proposed MobileNets CNN architecture achieves 96% prediction accuracy on 240 images covering the 12 static gestures, indicating that the prediction model is more robust and accurate than previous approaches. This work explores the benefits of deep learning by recognizing a subset of Sinhala sign language signs, bridging the communication gap between deaf or mute people and others. |
en_US |
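
The abstract names a MobileNets classifier built with Keras on a TensorFlow backend for 12 static gesture classes. The following is a minimal sketch of that kind of setup, not the authors' actual code; the input size (224x224 RGB), ImageNet pretraining, frozen base, dropout rate, optimizer, and the train_ds/val_ds datasets are all illustrative assumptions.

    # Sketch: MobileNet-based classifier for 12 static hand gesture classes.
    # Assumptions (not from the thesis): 224x224 RGB inputs, ImageNet weights,
    # frozen base network, and a small softmax classification head.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 12  # 12 static Sinhala sign language gestures

    base = tf.keras.applications.MobileNet(
        input_shape=(224, 224, 3),
        include_top=False,      # drop the 1000-class ImageNet head
        weights="imagenet",
    )
    base.trainable = False      # keep the pretrained convolutional features fixed

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

    # Hypothetical training call on tf.data.Dataset objects of (image, label) pairs:
    # model.fit(train_ds, validation_data=val_ds, epochs=20)

Freezing the pretrained base and training only the head is one common way to use a lightweight network such as MobileNet when the gesture dataset is small (here, on the order of a few hundred images).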