
THINK AND TYPE: DECODING EEG SIGNALS FOR A BRAIN-COMPUTER INTERFACE VIRTUAL SPELLER

Sherry Liu Jia Ni [1], Joanne Tan Si Ying [1], Yap Lin Hui [1], Zheng Yang Chin [2] and Chuanchu Wang [2]
[1] Nanyang Girls' High School, 2 Linden Drive, Singapore 288683
[2] 1 Fusionopolis Way, #21-01 Connexis (South Tower), Singapore 138632

Abstract

Scalp brain signals, or the electroencephalogram (EEG), exhibit different characteristics during different types of mental activity. These characteristics can be classified by a Mental Activity Brain-Computer Interface (MA-BCI) (Figure 1), which allows external devices to be controlled using only the EEG as input. This technology is potentially useful for patients who are incapable of communication due to total paralysis arising from medical conditions. With the aim of fulfilling the needs of these patients, this project investigates: first, the performance of a BCI that employs the Filter Bank Common Spatial Pattern (FBCSP) algorithm (Figure 2) in differentiating mental activities from the EEG; and second, a proposed virtual speller prototype that allows its user to type words on a computer with the EEG as the input.

Figure 1: BCI subject getting ready for training
Figure 2: Filter Bank Common Spatial Pattern algorithm

Methodology

1. Designed and developed the Virtual Speller in Adobe Flash ActionScript 3.0 (Figures 3 and 4).
2. Conducted experiments to determine the accuracy of the FBCSP algorithm on the 5 mental activities (MA); a sketch of this pipeline follows this section:
• Initial training session: the subject performs the 5 MA to train the FBCSP algorithm to classify the EEG data.
• Training session with BCI visual feedback: the subject performs 400 MA trials, comprising 80 left-hand (L), 80 right-hand (R), 80 foot (F) and 80 tongue (T) motor imageries plus 80 mental arithmetic (AR) trials, to determine the accuracy of the computational model obtained in the initial training session.
3. Analyzed the experimental results offline to obtain the accuracies of the 5 MA:
• 10x10 cross-validation (CV) to estimate the accuracy of the FBCSP algorithm on unseen data.
• Selection of the 4 MA with the highest classification accuracies for the proposed Virtual Speller.
4. Tested the Virtual Speller:
• Testing session: the subject is tasked to type "hello" with and without the word-prediction function to determine the speller's efficiency.
• Characters typed per minute is used as the measure of the Virtual Speller's efficiency.

Table 1: Proposed features of the Speller
Figure 3: Screenshot of the Virtual Speller GUI, showing the text output of the speller, the cue provided to the user to start/stop performing motor imagery, and the grid of buttons consisting of letters and predicted words with the current row and column highlighted
Figure 4: Flow chart illustrating usage of the Virtual Speller
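The poster does not include source code, so the following is only a minimal, illustrative sketch of an FBCSP-style pipeline evaluated with 10x10 cross-validation, written in Python with NumPy, SciPy and scikit-learn rather than the ActionScript used for the speller GUI. The sampling rate, filter-bank band edges, number of CSP filter pairs, feature count and LDA classifier are all assumptions, not the authors' exact settings, and the data here are random placeholders.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

FS = 250                                       # sampling rate in Hz (assumed)
BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24),
         (24, 28), (28, 32), (32, 36), (36, 40)]   # filter bank (assumed)

def bandpass(X, lo, hi, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, X, axis=-1)

def csp_filters(X, y, n_pairs=2):
    """Two-class CSP: generalized eigendecomposition of class covariances."""
    covs = []
    for c in np.unique(y):
        trials = X[y == c]
        C = np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
        covs.append(C)
    # eigenvectors of C1 w.r.t. C1 + C2; the extremes maximize the
    # variance contrast between the two classes
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    idx = np.argsort(vals)
    picks = np.concatenate([idx[:n_pairs], idx[-n_pairs:]])
    return vecs[:, picks].T                    # shape: (2*n_pairs, channels)

class FBCSP(BaseEstimator, TransformerMixin):
    """Log-variance CSP features from every band of the filter bank.
    For brevity this sketch handles the two-class case; the poster's
    multi-class problem would wrap it one-vs-rest."""
    def fit(self, X, y):
        self.W_ = [csp_filters(bandpass(X, lo, hi), y) for lo, hi in BANDS]
        return self
    def transform(self, X):
        feats = []
        for (lo, hi), W in zip(BANDS, self.W_):
            Z = np.einsum("fc,nct->nft", W, bandpass(X, lo, hi))
            v = Z.var(axis=-1)
            feats.append(np.log(v / v.sum(axis=1, keepdims=True)))
        return np.concatenate(feats, axis=1)

# X: (trials, channels, samples); y: binary labels, e.g. L vs R trials
X = np.random.randn(160, 22, 2 * FS)           # placeholder EEG, 2 s trials
y = np.repeat([0, 1], 80)

clf = make_pipeline(FBCSP(),
                    SelectKBest(mutual_info_classif, k=8),  # MI-based selection
                    LinearDiscriminantAnalysis())

# 10x10 CV as described in the poster: 10 folds, repeated 10 times
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"10x10 CV accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")

What FBCSP adds over plain CSP, per the Ang et al. reference below, is the bank of band-pass filters with per-band spatial filtering and data-driven feature selection, so the discriminative frequency band need not be fixed in advance.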
Results and Discussion

Table 2: 10x10 CV confusion matrix for the 5 classes of MA, using data from the initial training session (L = left hand, R = right hand, T = tongue, F = foot, AR = mental arithmetic)

1. Classification accuracy of 10x10 cross-validation (CV) on the initial training data
• The data are split into n = 10 sets; 9 sets are used to construct the classifier and the remaining set is used for validation. This is repeated 10 times with different random partitions into training and validation sets.
• The average classification accuracy over the 10x10 CV was about 66.62 ± 1.8785%, as shown in Table 2.

2. Classification accuracy of 10x10 CV for L, R, T, F versus L, R, F, AR on the initial training data
• The classification accuracies of the combinations L, R, T, F and L, R, F, AR were compared to determine the optimal 4 classes.
• The testing accuracies of L, R, T, F (72.50%) and L, R, F, AR (71.88%) are very similar and not conclusive on their own.
• Under 10x10 CV, however, L, R, F, T achieved an average accuracy of 73.86 ± 1.8762%, while L, R, F, AR achieved 79.41 ± 1.1918%. The latter combination was therefore selected; the top 4 classes are L, R, F and AR.

3. Performance of the Virtual Speller
• The number of single trials taken by the subject to type the word "hello" is summarized in Table 3.

Table 3: Number of trials (theoretical and actual) needed to type "hello"

4. Analysis of the CSP plots (Figure 5)
• These results tallied with the understanding of the human motor homunculus: the spatial patterns arising from the three motor imageries (L, R, F) each showed a distinct, focused point of activation. As AR is not a type of motor imagery, the activation in its spatial pattern is not well defined.

Figure 5: Common Spatial Pattern (CSP) plots

Conclusion

• The results show that four types of mental activity, left-hand (L), right-hand (R) and foot (F) motor imagery plus mental arithmetic (AR), could be classified with an accuracy of 76.39% and could therefore be employed in the virtual speller.
• L and R are classified more accurately than the other classes; the algorithm used was originally designed for these two types of motor imagery.
• The undo function allowed error correction, while the text-prediction function improved the usability of the virtual speller by decreasing the time taken to type a five-letter word by 56.39%.
• A future extension is an auto-elimination feature that automatically eliminates letters that cannot follow the previously chosen letter, shortening the number of trials required and improving usability for real-world applications (a sketch of this idea follows below).
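The poster describes the proposed auto-elimination feature only as an idea, so the snippet below is a rough illustrative sketch of one way it could work: given the prefix typed so far, only letters that can still extend some word in a lexicon remain selectable. The word list here is a tiny placeholder; a real speller would load a full dictionary.

# Sketch of the proposed auto-elimination feature: after each selection,
# letters that cannot legally continue any dictionary word are removed
# from the grid, shrinking the row/column scan and the number of
# motor-imagery trials needed per character.
WORDS = ["hello", "help", "helmet", "hero", "hill", "house"]  # placeholder lexicon

def allowed_next_letters(prefix, words=WORDS):
    """Return the set of letters that may follow `prefix` in some word."""
    n = len(prefix)
    return {w[n] for w in words if w.startswith(prefix) and len(w) > n}

# Typing "hello": after "he" only 'l' and 'r' remain selectable, and so on.
prefix = ""
for target in "hello":
    options = sorted(allowed_next_letters(prefix))
    print(f"prefix {prefix!r}: selectable letters {options}")
    assert target in options
    prefix += target

Because the highlighted row/column scan only has to cover the surviving letters, each eliminated letter directly reduces the expected number of trials per character, which is the usability gain the conclusion anticipates.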

Key References

• J. R. Wolpaw, N. Birbaumer, D. J. McFarland, et al., "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, 2002.
• K. K. Ang, Z. Y. Chin, H. Zhang, et al., "Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface," in Proc. IJCNN'08, 2008, pp. 2390-2397.
• H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehabil. Eng., vol. 8, pp. 441-446, 2000.
• L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalogr. Clin. Neurophysiol., vol. 70, pp. 510-523, 1988.
• F.-B. Vialatte, M. Maurice, J. Dauwels, et al., "Steady-state visually evoked potentials: Focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, pp. 418-438, 2010.
• G. R. Müller-Putz, R. Scherer, C. Brauneis, et al., "Steady-state visual evoked potential (SSVEP)-based communication: impact of harmonic frequency components," J. Neural Eng., vol. 2, pp. 123-126, 2005.

All photographs were taken at the research institution; plots and images are self-drawn.