Comparative analysis on real-time hand gesture and sign language recognition using convexity defects and YOLOv3
1Department of CSE, International Islamic University Chittagong, Chattogram, 4318, Bangladesh
Sigma J Eng Nat Sci 2024; 42(1): 99-115 DOI: 10.14744/sigma.2024.00012

Abstract

The purpose of this paper is to help people with auditory and speech disabilities communicate with others and to control computers and machines. This paper proposes two different methods for identifying six distinctive hand gestures and sign language under divergent environmental conditions. The first method is based on hand feature extraction, i.e., convexity defects. Initially, the hand region is detected by an HSV skin-color segmentation process. The contour and convex hull of the hand are then extracted from the hand region. Finally, convexity defects are determined to identify the hand gestures. The second method is a deep learning-based YOLOv3 model that uses the Darknet-53 convolutional neural network (CNN) as its backbone. The model is trained on a large annotated dataset. Experimental results reveal that the deep learning method outperforms the hand feature approach, obtaining 98.92% and 95.57% accuracy for the deep learning and hand feature-based models, respectively.
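
As a rough illustration of the first pipeline (HSV skin segmentation, contour and convex hull extraction, then convexity defects), the following minimal OpenCV sketch in Python counts the deep convexity defects of the largest skin-colored contour in a frame. The HSV bounds, the morphological cleanup, and the depth threshold are illustrative assumptions, not the paper's tuned values.

    import cv2
    import numpy as np

    def count_convexity_defects(bgr_frame,
                                lower_hsv=(0, 30, 60),    # assumed skin-color bounds
                                upper_hsv=(20, 150, 255),
                                depth_thresh=10000):      # assumed depth cutoff (fixed-point, ~39 px)
        """Count deep convexity defects in the largest skin-colored contour,
        a rough proxy for the gaps between extended fingers."""
        # 1. Segment the hand region with an HSV skin-color threshold.
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8),
                                np.array(upper_hsv, np.uint8))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # 2. Extract the hand contour as the largest blob in the mask.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)

        # 3. Compute the convex hull (as point indices) and the convexity defects.
        hull = cv2.convexHull(hand, returnPoints=False)
        if hull is None or len(hull) < 3:
            return 0
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0

        # 4. Keep only defects deep enough to correspond to finger gaps.
        return int(sum(1 for d in defects[:, 0] if d[3] > depth_thresh))

In such a scheme, the number of deep defects (e.g., four defects for an open palm, zero for a fist) can be mapped to a small set of gestures; the YOLOv3 approach instead learns gesture classes directly from the annotated images.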