Acta Scientific Computer Sciences

Research Article Volume 4 Issue 10

An LSTM Based Sign Language Recognition Model in Support of Patients with Hearing Disabilities

Ashwin Krishna Raparthy1, Bhim Prajapati1, Santhi Priya Yalla1, Vigneshkkar Ravichandran1, Enrico De Santis2 and Parisa Naraei3*

1Postgraduate AI Student, Lambton College, Canada
2Machine Learning Engineer and Chief Technology Officer, SisterPomos and Scientific Researcher, Sapienza Università di Roma, Italy
3Professor, Cestar College of Business, Health and Technology, Department of Artificial Intelligence, Lambton College, Applied Researcher, Toronto Metropolitan University Alumni, Canada

*Corresponding Author: Parisa Naraei, Professor, Cestar College of Business, Health and Technology, Department of Artificial Intelligence, Lambton College, Applied Researcher, Toronto Metropolitan University Alumni, Canada.

Received: July 26, 2022; Published: September 23, 2022

Abstract

Human beings are known for their intelligence, and among the many products of human thought, language is the one they use to communicate with each other. People who cannot hear, however, communicate through a different language, called sign language, which is expressed through the physical articulation of body parts and should not be confused with ordinary body language. Sign language is used primarily by deaf people, but it is also used by people who cannot speak.

With the evolution of technology and advances in AI, computers can now process human language using natural language processing (NLP). In this project, we use a deep learning method to recognize sign language. We are building a web app through which a signer can interact with a person who does not understand sign language. We use a convolutional neural network (CNN) to extract key points of the human body, such as the face, shoulders, arms, hands, and fingers, and a long short-term memory (LSTM) network to model the resulting sequences of key points. Our model predicts sign language with an accuracy of 95.8%.

Keywords: Convolutional Neural Network (CNN); Hidden Markov Model (HMM); Artificial Neural Network (ANN)
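Although the full implementation is not reproduced here, the pipeline described in the abstract (per-frame key-point extraction followed by an LSTM sequence classifier) can be illustrated with a minimal sketch. The sketch below is an assumption for illustration only, not the authors' published code: the clip length (30 frames), the per-frame feature size (1662 key-point values, a common pose-plus-face-plus-hands layout), the layer widths, and the number of sign classes are placeholder choices, and TensorFlow/Keras is assumed as the framework.

    # Illustrative sketch only (assumed architecture, not the authors' code):
    # an LSTM classifier over sequences of per-frame body/hand key points.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    NUM_FRAMES = 30      # assumed number of frames per sign clip
    NUM_FEATURES = 1662  # assumed key-point values per frame (pose + face + hands)
    NUM_CLASSES = 10     # hypothetical size of the sign vocabulary

    model = Sequential([
        LSTM(64, return_sequences=True, activation="relu",
             input_shape=(NUM_FRAMES, NUM_FEATURES)),
        LSTM(128, return_sequences=True, activation="relu"),
        LSTM(64, return_sequences=False, activation="relu"),
        Dense(64, activation="relu"),
        Dense(32, activation="relu"),
        Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])

    # Shape check with random placeholder data (not real sign-language data):
    # X holds key-point sequences, y holds one-hot sign labels.
    X = np.random.rand(8, NUM_FRAMES, NUM_FEATURES).astype("float32")
    y = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, size=8)].astype("float32")
    model.fit(X, y, epochs=1, verbose=0)

In such a layout, stacking LSTM layers with return_sequences=True keeps the temporal dimension between recurrent layers, so the model can accumulate evidence across the whole gesture before the final softmax classification.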


Citation

Citation: Parisa Naraei., et al. “An LSTM Based Sign Language Recognition Model in Support of Patients with Hearing Disabilities”. Acta Scientific Computer Sciences 4.10 (2022): 34-39.

Copyright

Copyright: © 2022 Parisa Naraei., et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.



