Authors (3)
Kastamonu Üniversitesi, Türkiye
Kastamonu Üniversitesi, Türkiye
Kastamonu Üniversitesi, Türkiye
Abstract
Sign language recognition helps hearing people and hearing-impaired people communicate effectively. According to the literature review, studies on Turkish sign language recognition are very few; for this reason, this study addresses Turkish sign language recognition. Depth cameras, such as the Leap Motion controller, allow researchers to exploit depth information to better understand hand movements. In this study, data for 10 letters of Turkish sign language were collected with a Leap Motion controller. Five of these letters (I, C, L, V, O) can be expressed with one hand, while the other five (B, D, M, N, K) require two hands. The dataset was collected from two different people. Each person performed five trials for each letter, and ten samples were taken in each trial. Artificial Neural Network, Deep Learning, and Decision Tree based models were designed, and their effectiveness in recognizing Turkish sign language was evaluated. Regression (R), Mean Square Error (MSE), and estimation accuracy were used as performance metrics. The dataset was randomly divided into 30% for training and 70% for testing. According to the experimental results, the most successful models for the dataset with 120 features are the decision tree and DNN models; for the dataset with 390 features, the DNN is the most successful model. |
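As a rough illustration of the evaluation setup described in the abstract (30% training / 70% testing split, decision tree and neural network classifiers, accuracy and MSE metrics), the minimal sketch below uses scikit-learn with placeholder data; the feature arrays, labels, network architecture, and hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' code: train a Decision Tree and a small
# neural network on Leap Motion feature vectors and report accuracy and MSE.
# X and y below are random placeholders; in the study, X would hold the
# 120- or 390-feature Leap Motion samples and y the 10 letter labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 120))      # placeholder feature matrix
y = rng.integers(0, 10, size=100)    # placeholder labels for the 10 letters

# 30% of the samples for training, 70% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.3, random_state=0, stratify=y)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 32),
                                max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(name,
          "accuracy:", accuracy_score(y_test, y_pred),
          "MSE:", mean_squared_error(y_test, y_pred))
```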
Keywords
Paper Type | Conference Paper
Paper Subtype | Paper Published as Full Text (International Congress/Symposium)
Paper Status | Peer-Reviewed International Congress/Symposium in Its Field
Paper Language | English
Conference Name | International Conference on Advanced Technologies, Computer Engineering and Science (ICATCES'18)
Conference Dates | 11-05-2018 / 13-05-2018
Country of Publication | Turkey
City of Publication | Safranbolu