Abstract

Neural Network Based Static Sign Gesture Recognition System

Parul Chaudhary, Hardeep Singh Ryait

Sign language is a natural medium of communication for the hearing and speech impaired all over the world. This paper presents a vision-based static sign gesture recognition system using a neural network. The system enables deaf people to interact easily and efficiently with hearing people. The system first converts images of static American Sign Language gestures into the Lab color space, where L represents lightness and (a, b) the color-opponent dimensions, and segments the skin region (the hand) using a thresholding technique. The region of interest (the hand) is cropped and converted into a binary image for feature extraction. The height, area, centroid, and distance of the centroid from the origin (top-left corner) of the image are then used as features. Finally, each feature vector is used to train a feed-forward backpropagation network. Experimental results show successful recognition of static sign gestures with an average recognition accuracy of 85% on a typical set of test images.
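As a rough illustration of the pipeline described above, the minimal sketch below assumes OpenCV, NumPy, and scikit-learn's MLPClassifier as a stand-in for the feed-forward backpropagation network; the skin thresholds, hidden-layer size, and helper names are illustrative placeholders rather than values taken from the paper.

```python
# Minimal sketch of the described pipeline (assumptions: OpenCV, NumPy,
# scikit-learn; thresholds and network size are illustrative, not the paper's).
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_features(bgr_image):
    """Segment the hand via Lab-space thresholding and return the four features."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # Assumed skin thresholds on the a/b (color-opponent) channels.
    skin = cv2.inRange(lab, (0, 135, 130), (255, 175, 180))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)          # largest skin blob = hand
    x, y, w, h = cv2.boundingRect(hand)
    roi = skin[y:y + h, x:x + w] > 0                   # cropped binary hand image
    area = int(roi.sum())                              # hand area in pixels
    ys, xs = np.nonzero(roi)
    cy, cx = ys.mean() + y, xs.mean() + x              # centroid in full-image coordinates
    dist = float(np.hypot(cx, cy))                     # distance from top-left origin
    return [h, area, cx, cy, dist]

# Hypothetical training data: feature vectors and their gesture labels.
# X = np.array([extract_features(img) for img in training_images])
# y = np.array(training_labels)
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
# prediction = clf.predict([extract_features(test_image)])
```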
