Title | Electroencephalographic based brain computer interface for unspoken speech |
Publication type | Conference paper |
Type | Conference paper with published proceedings |
Year | 2017 |
Language | English |
Conference dates | 12-14/09/2017 |
Conference title | SENSET 2017. International Conference on Sensors, Networks, Smart and Emerging Technologies |
Proceedings or journal title | 2017 Sensors Networks Smart and Emerging Technologies (SENSET) |
Pagination | 1-4 |
Authors | Abdallah, Nassib; Daya, Bassam; Khawandi, Shadi; Chauvet, Pierre |
Country | Lebanon |
Publisher | IEEE |
City | Beirut |
Keywords | artificial neural network, Biological neural networks, Brain Computer Interface methodology, brain-computer interfaces, data classification, database construction, Databases, electroencephalographic based brain computer interface, Electroencephalography, English words, feature extraction, feature vectors, neural nets, noise elimination methodology, signal classification, speech recognition, unspoken speech recognition |
Abstract (English) | This paper presents a Brain Computer Interface (BCI) methodology for unspoken speech recognition based on Electroencephalography (EEG). Each phase of the approach is presented and discussed, covering noise elimination, feature extraction, and data classification. The work consists of constructing a database of feature vectors that are classified into different classes by a three-layer artificial neural network. The proposed approach achieves high recognition rates (93% testing, 95% validation) when applied to two English words, ON/OFF, acquired from two different sources. |
Record URL | http://okina.univ-angers.fr/publications/ua17060 |
DOI | 10.1109/SENSET.2017.8125026 |
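Note: the abstract describes classifying EEG-derived feature vectors into ON/OFF classes with a three-layer artificial neural network. The snippet below is a minimal illustrative sketch of that kind of classifier, not the authors' implementation; the feature dimensions, hidden-layer size, training parameters, and the use of scikit-learn's MLPClassifier are all assumptions made for illustration.

    # Minimal sketch: three-layer network (input, one hidden layer, output)
    # classifying EEG feature vectors into two classes ("ON"/"OFF").
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Hypothetical feature matrix: one row per EEG trial, one column per feature.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # 200 trials, 64 features (assumed shapes)
    y = rng.integers(0, 2, size=200)  # 0 = "ON", 1 = "OFF" (assumed labels)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # One hidden layer gives the three-layer topology mentioned in the abstract.
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))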