International Journal of Scientific & Technology Research

Volume 9 - Issue 4, April 2020 Edition

Website: http://www.ijstr.org

ISSN 2277-8616



An Approach To Translate Hand Gestures Into Telugu Text


 

AUTHOR(S)

J SreeDevi, Dr M Rama Bai, Mohammed Maqsood Ahmed

 

KEYWORDS

Gesture Recognition, CNN, Sign Language, Telugu Text

 

ABSTRACT

Communication, an indispensable part of human activity expressed both verbally and non-verbally, comes easily to most of us, but for people who are deaf or mute it remains a challenge. This barrier hampers their talent and reduces their productivity. Engaging with such specially abled persons is possible only if we know sign language, an organized form of communication, or if an interpreter mediates the conversation. This study attempts to reduce these hurdles to some extent by building a gesture recognition model based on Convolutional Neural Networks (CNNs) that predicts static hand gestures and converts them into Telugu text, which is useful in the Telugu-speaking states. People who do not understand sign language can then follow what is being signed, removing the need for an interpreter. The study follows American Sign Language, the most widely used sign language worldwide, in which each alphabet/word is assigned a distinct sign.
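To illustrate the kind of pipeline the abstract describes, the sketch below shows a small CNN classifier for static gesture images whose predictions are mapped to Telugu text. This is a minimal sketch assuming a Keras/TensorFlow setup; the layer sizes, 64x64 grayscale input, 26-class ASL alphabet, and the gesture-to-Telugu mapping are illustrative assumptions, not the authors' published architecture.

# Minimal sketch of a CNN-based static gesture classifier (assumed setup,
# not the authors' exact model): small conv/pool stack with a softmax head,
# followed by a hypothetical mapping from predicted ASL class to Telugu text.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumed: one class per ASL alphabet sign
IMG_SHAPE = (64, 64, 1)   # assumed: 64x64 grayscale gesture images

def build_gesture_cnn():
    """Two convolution + pooling blocks, then a dense softmax classifier."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=IMG_SHAPE),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical lookup from predicted class index to a Telugu gloss; a real
# system would cover all recognised signs and assemble them into text.
TELUGU_GLOSS = {0: "అ", 1: "బి", 2: "సి"}  # remaining classes omitted

def predict_telugu(model, image):
    """Classify one preprocessed gesture image and return its Telugu gloss."""
    probs = model.predict(image[np.newaxis, ...], verbose=0)
    class_id = int(np.argmax(probs, axis=-1)[0])
    return TELUGU_GLOSS.get(class_id, "?")

In use, such a model would be trained on labelled gesture images (for example with model.fit on an ASL dataset) and then applied frame by frame, concatenating the predicted glosses into Telugu output.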

 

[19]. Wu, D., Pigou, L., Kindermans, P.J., Le, N.D.H., Shao, L., Dambre, J. and Odobez, J.M., 2016. Deep dynamic neural networks for multimodal gesture segmentation and recognition. IEEE transactions on pattern analysis and machine intelligence, 38(8), pp.1583-1597.