
International Journal of Scientific & Technology Research


Volume 9 - Issue 4, April 2020 Edition




Website: http://www.ijstr.org

ISSN 2277-8616



Human Computer Interaction Through Hand Gesture Recognition Technology


 

AUTHOR(S)

Nikhat Parveen, Arpita Roy, D. Sai Sandesh, J. Y. P. R. Sai Srinivasulu, N. Srikanth

 

KEYWORDS

Deaf, dumb, gestures, HGR, SVM, HCI, SIFT, HOG

 

ABSTRACT

Communication is the primary channel through which individuals interact with one another. Due to birth defects, accidents, and oral disorders, the number of people who are deaf or dumb has increased drastically in recent years. Since deaf and dumb people cannot communicate with hearing people through speech, they must rely on some form of visual communication. Most languages throughout the world are spoken and heard, so it is difficult for those who cannot hear or speak to grasp exactly what another person is trying to communicate, and equally difficult for others to understand them. The key challenges in hand gesture recognition arise from the complexity of the gesturing process itself, and this paper also discusses methods for evaluating recent posture and gesture identification approaches. Hand gesture recognition is becoming an increasingly popular field of research in human-computer interaction. Since every human hand shares the same basic form of four fingers and a thumb, this paper presents a real-time hand gesture recognition system that focuses on shape-based features such as orientation, center of mass (centroid), finger status (raised or folded), thumb position, and the respective locations of these features in the image. The solution depends entirely on the shape parameters of the hand gesture; it does not use other cues such as skin color or texture, because such image-based characteristics vary greatly with lighting conditions and other factors. The proposed shape-based approach can identify between one and five different gestures, depending on the performance of the algorithm. Evaluated on 50-60 images, the implemented algorithm achieves approximately 94 percent recognition accuracy, although the recognition rate at the identification level still needs improvement.
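
As a rough, self-contained illustration of the shape-only approach the abstract describes (centroid from image moments and raised-versus-folded fingers from the hand contour, with no skin-color or texture cues), the Python/OpenCV sketch below thresholds a hand image, takes the largest contour, computes the centroid, and estimates the number of raised fingers from convexity defects. The function name count_raised_fingers, the fixed binarization threshold, and the defect angle/depth heuristics are illustrative assumptions rather than the authors' implementation; OpenCV 4.x is assumed for the findContours return signature.

# Illustrative sketch only; not the authors' code.
import cv2
import numpy as np

def count_raised_fingers(image_path, thresh_val=70):
    """Estimate hand centroid and raised-finger count from shape alone."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Binarize; assumes a dark background so the hand is the bright region.
    _, mask = cv2.threshold(blur, thresh_val, 255, cv2.THRESH_BINARY)

    # Largest external contour is taken to be the hand (assumes m00 > 0 below).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)

    # Centroid (center of mass) from image moments.
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

    # Convexity defects: deep, narrow valleys occur between raised fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)

    valleys = 0
    if defects is not None:
        for s, e, f, d in defects[:, 0]:
            start, end, far = hand[s][0], hand[e][0], hand[f][0]
            a = np.linalg.norm(end - start)
            b = np.linalg.norm(far - start)
            c = np.linalg.norm(far - end)
            angle = np.degrees(np.arccos((b**2 + c**2 - a**2) / (2 * b * c)))
            # Heuristic thresholds: narrow angle and sufficient depth (d is in 1/256 px).
            if angle < 90 and d > 10000:
                valleys += 1
    # N valleys imply roughly N+1 raised fingers; 0 and 1 fingers are not distinguished here.
    fingers = valleys + 1 if valleys > 0 else valleys
    return centroid, fingers

The keyword list also mentions SVM together with SIFT and HOG descriptors, so the full system presumably feeds richer shape features into a classifier; the sketch above covers only the geometric counting step outlined in the abstract.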

 
