ABSTRACT

LIP MOTION SYNTHESIS USING PRINCIPAL COMPONENT ANALYSIS

Disha George, Yogesh Rathore

Recent studies show that not only audio but also visual cues convey information useful for speech recognition. This property can serve as a supplement in animation and lip-reading applications to enhance speech recognition, and it has attracted wide attention in audio-visual speech recognition (AVSR) because of its potential applications. This research proceeds in two phases: (i) first, frames are captured and features extracted from them are stored in a database as references; (ii) second, test image samples are fed to a trained neural network to identify which alphabet letter the person has spoken. The lip-reading system was developed using Principal Component Analysis on the input images, and 60% success was achieved in the test phase on letters with similar lip movements (such as u, o, q, b, e, i, l, n).
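The abstract describes extracting PCA features from lip-region frames before classification. As a minimal sketch of that first phase (not the authors' actual pipeline; the function name, frame sizes, and component count here are illustrative assumptions), PCA features for flattened grayscale frames can be computed with a singular value decomposition:

```python
import numpy as np

def pca_features(images, k=8):
    """Project flattened grayscale frames onto their top-k principal components."""
    X = np.asarray(images, dtype=float)   # shape (n_frames, n_pixels)
    mean = X.mean(axis=0)
    Xc = X - mean                         # center the data
    # SVD of the centered data; rows of Vt are principal axes in pixel space
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                   # (k, n_pixels) basis images
    features = Xc @ components.T          # (n_frames, k) low-dimensional codes
    return features, components, mean

# Toy usage: 20 synthetic 16x16 "lip frames" flattened to 256 pixels
rng = np.random.default_rng(0)
frames = rng.random((20, 256))
feats, comps, mu = pca_features(frames, k=5)
```

In a full system, the resulting low-dimensional feature vectors would be stored as the reference database in phase one and used as inputs to the neural-network classifier in phase two.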
