Abstract
The purpose of this paper is to help people with auditory and speech disabilities communicate with others and control computers and machines. It proposes two different methods for identifying six distinctive hand gestures and sign language under varying environmental conditions. The first method is based on hand feature extraction, namely convexity defects: the hand region is first detected through an HSV skin-color segmentation process, the contour and convex hull of the hand are then extracted from that region, and finally the convexity defects are computed to identify the gesture. The second method is a deep-learning-based YOLOv3 model that uses the Darknet-53 convolutional neural network (CNN) as its backbone and is trained on a large annotated dataset. Experimental results reveal that the deep-learning method outperforms the hand-feature approach, achieving 98.92% accuracy compared with 95.57% for the hand-feature-based model.
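To make the first method concrete, the sketch below illustrates the described pipeline (HSV skin segmentation, contour and convex hull extraction, convexity-defect counting) using OpenCV. It is a minimal illustration, not the authors' implementation: the HSV thresholds and the defect-depth cutoff are placeholder values, since the paper's exact parameters are not given in the abstract.

```python
import cv2
import numpy as np

def count_convexity_defects(bgr_frame):
    """Sketch of the hand-feature method; thresholds are illustrative."""
    # Step 1: segment skin-colored pixels in HSV space
    # (lower/upper bounds are assumed placeholders, not the paper's values).
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Step 2: extract the largest contour, assumed to be the hand region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    # Step 3: compute the convex hull (as point indices) and its defects.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Step 4: keep only deep defects (e.g., gaps between extended fingers);
    # OpenCV stores defect depth scaled by 256. The 20-pixel cutoff is an
    # assumed value for illustration.
    return sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20.0)
```

In such a pipeline, the number of deep convexity defects typically maps to the number of extended fingers, which is one plausible way the six gestures could be distinguished.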