
Sign language plays a vital role in communication for individuals with hearing and speech impairments, yet a communication barrier persists between sign language users and the general population. In this work, we propose HandspeakNet, a real-time sign language recognition system built on a convolutional neural network (CNN) architecture. The objective is to classify hand gestures corresponding to sign language alphabets from image data with high accuracy. The system is trained and evaluated on American Sign Language (ASL) datasets, demonstrating significant potential for assistive technologies and human-computer interaction. The proposed system achieves an approximately 5% improvement in recognition accuracy over current state-of-the-art methods.
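The abstract does not specify the network's layers, so the following is only an illustrative sketch of the general CNN classification pipeline it describes (convolution, ReLU activation, pooling, then a linear classifier over the 26 alphabet classes); the filter count, kernel size, and input resolution are assumptions, and the weights are untrained:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def classify(image, kernels, weights, bias):
    """Conv -> ReLU -> global average pool -> linear layer -> class index."""
    feats = np.array([relu(conv2d(image, k)).mean() for k in kernels])
    logits = weights @ feats + bias
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
image = rng.random((28, 28))               # stand-in for a grayscale hand-gesture crop
kernels = rng.standard_normal((8, 3, 3))   # 8 untrained 3x3 filters (illustrative only)
weights = rng.standard_normal((26, 8))     # 26 output classes: the A-Z alphabet signs
bias = np.zeros(26)

pred = classify(image, kernels, weights, bias)
print(pred)  # a class index in [0, 26)
```

In a trained system the kernels and classifier weights would be learned by backpropagation on labeled ASL images; the forward pass above only shows the shape of the computation.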