Please use this identifier to cite or link to this item: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/3506
Title: FRAMEWORK FOR SINHALA SIGN LANGUAGE RECOGNITION AND TRANSLATION USING A WEARABLE ARMBAND
Authors: SENEVIRATHNE, R.G.D.C.
MADUSHANKA, A.L.P.
WIJESEKARA, L.M.H.
Keywords: Surface Electromyography (EMG)
Accelerometer
Gyroscope
Orientation
Sinhala Sign Language
Supervised Artificial Neural Network Classifier
MYO Gesture Recognition Armband
Issue Date: 8-Jun-2016
Abstract: The hearing- and speech-impaired community represents a major portion of the world population. The main obstacle its members face in general society is communicating with hearing people: the hearing- and speech-impaired use sign language, which most hearing people do not understand. The aim of this research is to bridge this gap by proposing a framework that recognizes sign language gestures and translates them into natural language. Although previous studies have addressed the functional part of the problem, they have lacked in usability. Our approach is to use a non-invasive wearable device to capture Sinhala Sign Language gestures through a combination of gestural and spatial data: Electromyography serves as the gestural data source, while the Accelerometer, Gyroscope and Magnetometer (orientation) serve as spatial data sources. Sign recognition is performed with an Artificial Neural Network. This is the first study to take this approach with Sinhala Sign Language in Sri Lanka. Sinhala signs were selected to represent all kinds of hand and finger gestures, and six different subjects performed them during data collection. The dataset was then preprocessed and cleansed, and active segments were identified through a manual segmentation technique. Feature extraction obtained the Mean Absolute Value, Standard Deviation and Variance from each gestural and spatial data segment, and sign identification was carried out using a supervised ANN. Results were evaluated in three ways: recognition using gestural data only, spatial data only, and the combination of both. The model implementation is further categorized into a person-dependent framework and a person-independent (generalized) framework. The combined gestural (EMG) and spatial (IMU) approach achieved a high accuracy of 95.0% in the person-independent study, a very significant and successful result.
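The feature-extraction step described in the abstract lends itself to a short illustration. The following is a minimal Python sketch, not code from the thesis: it assumes each manually segmented active gesture arrives as a samples-by-channels NumPy array (the Myo armband exposes 8 EMG channels plus IMU readings, but the channel layout and the function names segment_features and combined_features are illustrative assumptions), and it computes the Mean Absolute Value, Standard Deviation and Variance per channel as the abstract lists them, concatenating EMG and IMU features for the combined evaluation.

import numpy as np

def segment_features(segment):
    # segment: (n_samples, n_channels) array for one active segment,
    # e.g. 8-channel EMG or stacked accelerometer/gyroscope/orientation data.
    mav = np.mean(np.abs(segment), axis=0)  # Mean Absolute Value per channel
    sd = np.std(segment, axis=0)            # Standard Deviation per channel
    var = np.var(segment, axis=0)           # Variance per channel
    return np.concatenate([mav, sd, var])   # 3 * n_channels feature vector

def combined_features(emg_segment, imu_segment):
    # Fuse gestural (EMG) and spatial (IMU) features into one vector,
    # mirroring the combined EMG + IMU evaluation reported in the abstract.
    return np.concatenate([segment_features(emg_segment),
                           segment_features(imu_segment)])

The resulting fixed-length vectors would then serve as inputs to the supervised ANN classifier; the per-source evaluations correspond to using segment_features of the EMG or IMU data alone.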
URI: http://hdl.handle.net/123456789/3506
Appears in Collections: BICT Group project (2015)

Files in This Item:
File: Framework for Sinhala Sign Language Recognition and Translation Using a Wearable Armband.pdf (Restricted Access)
Size: 5.54 MB
Format: Adobe PDF


Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.