<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kubánek, J</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Snyder, Lawrence H.</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Cortical alpha activity predicts the confidence in an impending action.</style></title><secondary-title><style face="normal" font="default" size="100%">Front. Neurosci</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">certainty</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">human</style></keyword><keyword><style  face="normal" font="default" size="100%">neural correlates</style></keyword><keyword><style  face="normal" font="default" size="100%">perceptual decision-making</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">07/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://journal.frontiersin.org/article/10.3389/fnins.2015.00243/abstract</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">When we make a decision, we experience a degree of confidence that our choice may lead to a desirable outcome. 
Recent studies in animals have probed the subjective aspects of choice confidence using confidence-reporting tasks. These studies showed that estimates of choice confidence substantially modulate neural activity in multiple regions of the brain. Building on these findings, we investigated the neural representation of choice confidence in humans who explicitly reported the confidence in their choice. Subjects performed a perceptual decision task in which they decided between a button press and a saccade while we recorded EEG activity. Following each choice, subjects indicated whether they were sure or unsure about the choice. We found that alpha activity strongly encodes a subject's confidence level in a forthcoming button press choice. The neural effect of the subjects' confidence was independent of the reaction time and of the sensory input modeled as a decision variable. Furthermore, the effect was not due to a general cognitive state, such as reward expectation, because it was observed specifically during button press choices and not during saccade choices. The neural effect of confidence in the ensuing button press choice was strong enough that we could predict, from independent single-trial neural signals, whether a subject was going to be sure or unsure of an ensuing button press choice. 
In sum, alpha activity in human cortex provides a window into the commitment to make a hand movement.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Häuser, Ann-Katrin</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A general method for assessing brain–computer interface performance and its limitations.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">information gain</style></keyword><keyword><style  face="normal" font="default" size="100%">information transfer rate</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuroprosthetics</style></keyword><keyword><style  face="normal" font="default" size="100%">performance evaluation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24658406</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">11</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Objective. 
When researchers evaluate brain–computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system's performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw's information transfer rate as a special case, but addresses the latter's limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects' performance using an EEG-based BCI, a 'Direct Controller' (a high-performance hardware input device), and a 'Pseudo-BCI Controller' (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits/min). Significance. 
Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.</style></abstract><issue><style face="normal" font="default" size="100%">026018</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Disha Gupta</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Adamo, Matthew A</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroimage Clin</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroimage Clin</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory processing</style></keyword><keyword><style  face="normal" font="default" size="100%">electrocorticography (ECoG)</style></keyword><keyword><style  face="normal" font="default" size="100%">electrode localization</style></keyword><keyword><style  face="normal" font="default" size="100%">fiducials</style></keyword><keyword><style  face="normal" font="default" size="100%">intra-operative localization</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25379417</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">64-76</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;b&gt;INTRODUCTION: &lt;/b&gt;Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks.&lt;/p&gt;&lt;p&gt;&lt;b&gt;METHODS: &lt;/b&gt;We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images.&lt;/p&gt;&lt;p&gt;&lt;b&gt;RESULTS: &lt;/b&gt;Our fiducial-based method agreed with the MRI-CT method to within an average of 8.24 mm (mean, median = 7.10 mm) across 6 subjects in 3 dimensions. 
It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied it during real-time mapping of eloquent cortex in a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Ricci, Erin</style></author><author><style face="normal" font="default" size="100%">Haider, Sameah</style></author><author><style face="normal" font="default" size="100%">McCane, Lynn M</style></author><author><style face="normal" font="default" size="100%">Susan M Heckman</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">Theresa M Vaughan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A practical, intuitive brain-computer interface for communicating 'yes' or 'no' by listening.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Aged</style></keyword><keyword><style 
face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids for Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Equipment Design</style></keyword><keyword><style  face="normal" font="default" size="100%">Equipment Failure Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Man-Machine Systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Quadriplegia</style></keyword><keyword><style  face="normal" font="default" size="100%">Treatment Outcome</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24838278</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">11</style></volume><pages><style face="normal" font="default" size="100%">035003</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" 
size="100%">OBJECTIVE:
Previous work has shown that it is possible to build an EEG-based binary brain-computer interface (BCI) system driven purely by shifts of attention to auditory stimuli. However, previous studies used abrupt, abstract stimuli that are often perceived as harsh and unpleasant, and whose lack of inherent meaning may make the interface unintuitive and difficult for beginners. We aimed to establish whether we could transition to a system based on more natural, intuitive stimuli (the spoken words 'yes' and 'no') without loss of performance, and whether the system could be used by people in the locked-in state.
APPROACH:
We performed a counterbalanced, interleaved within-subject comparison between an auditory streaming BCI that used beep stimuli, and one that used word stimuli. Fourteen healthy volunteers performed two sessions each, on separate days. We also collected preliminary data from two subjects with advanced amyotrophic lateral sclerosis (ALS), who used the word-based system to answer a set of simple yes-no questions.
MAIN RESULTS:
The N1, N2 and P3 event-related potentials elicited by words varied more between subjects than those elicited by beeps. However, the difference between responses to attended and unattended stimuli was more consistent with words than beeps. Healthy subjects' performance with word stimuli (mean 77% ± 3.3 s.e.) was slightly but not significantly better than their performance with beep stimuli (mean 73% ± 2.8 s.e.). The two subjects with ALS used the word-based BCI to answer questions with a level of accuracy similar to that of the healthy subjects.
SIGNIFICANCE:
Since performance using word stimuli was at least as good as performance using beeps, we recommend that auditory streaming BCI systems be built with word stimuli to make the system more pleasant and intuitive. Our preliminary data show that word-based streaming BCI is a promising tool for communication by people who are locked in.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Disha Gupta</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Simultaneous Real-Time Monitoring of Multiple Cortical Systems.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory processing</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">movement intention</style></keyword><keyword><style  face="normal" font="default" size="100%">realtime decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">simultaneous decoding</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">10/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25080161</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">OBJECTIVE:
Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance.
APPROACH:
We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously.
MAIN RESULTS:
Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time.
SIGNIFICANCE:
This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provide benchmarks and guidelines for future research on this topic.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Interactions Between Pre-Processing and Classification Methods for Event-Related-Potential Classification: Best-Practice Guidelines for Brain-Computer Interfacing.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroinformatics</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroinformatics</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">ERP</style></keyword><keyword><style  face="normal" font="default" size="100%">LDA</style></keyword><keyword><style  face="normal" font="default" size="100%">spatial filtering</style></keyword><keyword><style  face="normal" font="default" size="100%">spectral filtering</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2013</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2013</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" 
font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/23250668</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g. visual or tactile), ERP component (e.g. P300 oddball response, or the error-potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which made detection performance independent of the choice of spectral filter low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic with no expert feature selection or parameter tuning required. 
Thus, we recommend this combination as a &quot;best-practice&quot; method for ERP detection problems.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Moinuddin, Aisha</style></author><author><style face="normal" font="default" size="100%">Häuser, Ann-Katrin</style></author><author><style face="normal" font="default" size="100%">Kienzle, Stephan</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory attention</style></keyword><keyword><style  face="normal" font="default" size="100%">auditory event-related potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">dichotic listening</style></keyword><keyword><style  face="normal" font="default" size="100%">N1 potential</style></keyword><keyword><style  face="normal" font="default" size="100%">P3 potential</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">12/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/23267312</style></url></web-urls></urls><volume><style 
face="normal" font="default" size="100%">6</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. 
However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22333135</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">9</style></volume><pages><style face="normal" font="default" size="100%">026011</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users, for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. 
To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare 'oddball' stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. 
The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Disha Gupta</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Adamo, Matthew A</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Recording Human Electrocorticographic (ECoG) Signals for Neuroscientific Research and Real-time Functional Cortical Mapping.</style></title><secondary-title><style face="normal" font="default" size="100%">J Vis Exp</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Vis Exp</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI2000</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfacing</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">epilepsy monitoring</style></keyword><keyword><style  face="normal" font="default" 
size="100%">functional brain mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">issue 64</style></keyword><keyword><style  face="normal" font="default" size="100%">Magnetic Resonance Imaging</style></keyword><keyword><style  face="normal" font="default" size="100%">MRI</style></keyword><keyword><style  face="normal" font="default" size="100%">neuroscience</style></keyword><keyword><style  face="normal" font="default" size="100%">SIGFRIED</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">05/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22782131</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Neuroimaging studies of human cognitive, sensory, and motor processes are usually based on noninvasive techniques such as electroencephalography (EEG), magnetoencephalography or functional magnetic-resonance imaging. These techniques have either inherently low temporal or low spatial resolution, and suffer from low signal-to-noise ratio and/or poor high-frequency sensitivity. Thus, they are suboptimal for exploring the short-lived spatio-temporal dynamics of many of the underlying brain processes. In contrast, the invasive technique of electrocorticography (ECoG) provides brain signals that have an exceptionally high signal-to-noise ratio, less susceptibility to artifacts than EEG, and a high spatial and temporal resolution (i.e., &amp;lt;1 cm/&amp;lt;1 millisecond, respectively). 
ECoG involves measurement of electrical brain signals using electrodes that are implanted subdurally on the surface of the brain. Recent studies have shown that ECoG amplitudes in certain frequency bands carry substantial information about task-related activity, such as motor execution and planning, auditory processing and visual-spatial attention. Most of this information is captured in the high gamma range (around 70-110 Hz). Thus, gamma activity has been proposed as a robust and general indicator of local cortical function. ECoG can also reveal functional connectivity and resolve finer task-related spatio-temporal dynamics, thereby advancing our understanding of large-scale cortical processes. It has especially proven useful for advancing brain-computer interfacing (BCI) technology for decoding a user's intentions to enhance or improve communication and control. Nevertheless, human ECoG data are often hard to obtain because of the risks and limitations of the invasive procedures involved, and the need to record within the constraints of clinical settings. Still, clinical monitoring to localize epileptic foci offers a unique and valuable opportunity to collect human ECoG data. We describe our methods for recording ECoG, and demonstrate how to use these signals for important real-time applications such as clinical mapping and brain-computer interfacing. Our example uses the BCI2000 software platform and the SIGFRIED method, an application for real-time mapping of brain functions. This procedure yields information that clinicians can subsequently use to guide the complex and laborious process of functional mapping by electrical stimulation. 
PREREQUISITES AND PLANNING: Patients with drug-resistant partial epilepsy may be candidates for resective surgery of an epileptic focus to minimize the frequency of seizures. Prior to resection, the patients undergo monitoring using subdural electrodes for two purposes: first, to localize the epileptic focus, and second, to identify nearby critical brain areas (i.e., eloquent cortex) where resection could result in long-term functional deficits. To implant electrodes, a craniotomy is performed to open the skull. Then, electrode grids and/or strips are placed on the cortex, usually beneath the dura. A typical grid has a set of 8 x 8 platinum-iridium electrodes of 4 mm diameter (2.3 mm exposed surface) embedded in silicone with an inter-electrode distance of 1 cm. A strip typically contains 4 or 6 such electrodes in a single line. The locations for these grids/strips are planned by a team of neurologists and neurosurgeons, and are based on previous EEG monitoring, on a structural MRI of the patient's brain, and on relevant factors of the patient's history. Continuous recording over a period of 5-12 days serves to localize epileptic foci, and electrical stimulation via the implanted electrodes allows clinicians to map eloquent cortex. At the end of the monitoring period, explantation of the electrodes and therapeutic resection are performed together in one procedure. In addition to its primary clinical purpose, invasive monitoring also provides a unique opportunity to acquire human ECoG data for neuroscientific research. The decision to include a prospective patient in the research is based on the planned location of their electrodes, on the patient's performance scores on neuropsychological assessments, and on their informed consent, which is predicated on their understanding that participation in research is optional and is not related to their treatment. 
As with all research involving human subjects, the research protocol must be approved by the hospital's institutional review board. The decision to perform individual experimental tasks is made day-by-day, and is contingent on the patient's endurance and willingness to participate. Some or all of the experiments may be prevented by problems with the clinical state of the patient, such as post-operative facial swelling, temporary aphasia, frequent seizures, post-ictal fatigue and confusion, and more general pain or discomfort. At the Epilepsy Monitoring Unit at Albany Medical Center in Albany, New York, clinical monitoring is implemented around the clock using a 192-channel Nihon-Kohden Neurofax monitoring system. Research recordings are made in collaboration with the Wadsworth Center of the New York State Department of Health in Albany. Signals from the ECoG electrodes are fed simultaneously to the research and the clinical systems via splitter connectors. To ensure that the clinical and research systems do not interfere with each other, the two systems typically use separate grounds. In fact, an epidural strip of electrodes is sometimes implanted to provide a ground for the clinical system. For both the research and the clinical recording systems, the grounding electrode is chosen to be distant from the predicted epileptic focus and from cortical areas of interest for the research. Our research system consists of eight synchronized 16-channel g.USBamp amplifier/digitizer units (g.tec, Graz, Austria). These were chosen because they are safety-rated and FDA-approved for invasive recordings, they have a very low noise-floor in the high-frequency range in which the signals of interest are found, and they come with an SDK that allows them to be integrated with custom-written research software. 
In order to capture the high-gamma signal accurately, we acquire signals at a 1200 Hz sampling rate, considerably higher than that of the typical EEG experiment or that of many clinical monitoring systems. A built-in low-pass filter automatically prevents aliasing of signals higher than the digitizer can capture. The patient's eye gaze is tracked using a monitor with a built-in Tobii T-60 eye-tracking system (Tobii Tech., Stockholm, Sweden). Additional accessories such as joystick, Bluetooth Wiimote (Nintendo Co.), data-glove (5th Dimension Technologies), keyboard, microphone, headphones, or video camera are connected depending on the requirements of the particular experiment. Data collection, stimulus presentation, synchronization with the different input/output accessories, and real-time analysis and visualization are accomplished using our BCI2000 software. BCI2000 is a freely available general-purpose software system for real-time biosignal data acquisition, processing and feedback. It includes an array of pre-built modules that can be flexibly configured for many different purposes, and that can be extended by researchers' own code in C++, MATLAB or Python. BCI2000 consists of four modules that communicate with each other via a network-capable protocol: a Source module that handles the acquisition of brain signals from one of 19 different hardware systems from different manufacturers; a Signal Processing module that extracts relevant ECoG features and translates them into output signals; an Application module that delivers stimuli and feedback to the subject; and the Operator module that provides a graphical interface to the investigator. A number of different experiments may be conducted with any given patient. The priority of experiments will be determined by the location of the particular patient's electrodes. 
However, we usually begin our experimentation using the SIGFRIED (SIGnal modeling For Realtime Identification and Event Detection) mapping method, which detects and displays significant task-related activity in real time. The resulting functional map allows us to further tailor subsequent experimental protocols and may also serve as a useful starting point for traditional mapping by electrocortical stimulation (ECS). Although ECS mapping remains the gold standard for predicting the clinical outcome of resection, the process of ECS mapping is time-consuming and also has other problems, such as after-discharges or seizures. Thus, a passive functional mapping technique may prove valuable in providing an initial estimate of the locus of eloquent cortex, which may then be confirmed and refined by ECS. The results from our passive SIGFRIED mapping technique have been shown to exhibit substantial concurrence with the results derived using ECS mapping. The protocol described in this paper establishes a general methodology for gathering human ECoG data, before proceeding to illustrate how experiments can be initiated using the BCI2000 software platform. 
Finally, as a specific example, we describe how to perform passive functional mapping using the BCI2000-based SIGFRIED system.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">64</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pei, Xiao-Mei</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Silent Communication: toward using brain signals.</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Pulse</style></secondary-title><alt-title><style face="normal" font="default" size="100%">IEEE Pulse</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Animals</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain Waves</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Movement</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22344951</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" 
size="100%">43-6</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;From the 1980s movie Firefox to the more recent Avatar, popular science fiction has speculated about the possibility of a person's thoughts being read directly from his or her brain. Such brain-computer interfaces (BCIs) might allow people who are paralyzed to communicate with and control their environment, and there might also be applications in military situations where silent user-to-user communication is desirable. Previous studies have shown that BCI systems can use brain signals related to movements and movement imagery or attention-based character selection. Although these systems have successfully demonstrated the possibility of controlling devices using brain function, directly inferring which word a person intends to communicate has been elusive. A BCI using imagined speech might provide such a practical, intuitive device. Toward this goal, our studies to date addressed two scientific questions: (1) Can brain signals accurately characterize different aspects of speech? 
(2) Is it possible to predict spoken or imagined words or their components using brain signals?&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Grosse-Wentrup, Moritz</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Causal influence of gamma oscillations on the sensorimotor rhythm.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroimage</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroimage</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Cerebral Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">05/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/20451626</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">56</style></volume><pages><style face="normal" font="default" size="100%">837-42</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Gamma oscillations of the electromagnetic field of the brain are known to be involved in a variety of cognitive processes, and are believed to be fundamental for information processing within the brain. 
While gamma oscillations have been shown to be correlated with brain rhythms at different frequencies, to date no empirical evidence has been presented that supports a causal influence of gamma oscillations on other brain rhythms. In this work, we study the relation of gamma oscillations and the sensorimotor rhythm (SMR) in healthy human subjects using electroencephalography. We first demonstrate that modulation of the SMR, induced by motor imagery of either the left or right hand, is positively correlated with the power of frontal and occipital gamma oscillations, and negatively correlated with the power of centro-parietal gamma oscillations. We then demonstrate that the simplest causal structure capable of explaining the observed correlation of gamma oscillations and the SMR entails a causal influence of gamma oscillations on the SMR. 
This finding supports the fundamental role attributed to gamma oscillations for information processing within the brain, and is of particular importance for brain-computer interfaces (BCIs). As modulation of the SMR is typically used in BCIs to infer a subject's intention, our findings entail that gamma oscillations have a causal influence on a subject's capability to utilize a BCI as a means of communication.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gomez-Rodriguez, M</style></author><author><style face="normal" font="default" size="100%">Peters, J</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Gharabaghi, 
A</style></author><author><style face="normal" font="default" size="100%">Grosse-Wentrup, Moritz</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Closing the sensorimotor loop: haptic feedback facilitates decoding of motor imagery.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Motor</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Somatosensory</style></keyword><keyword><style  face="normal" font="default" size="100%">Feedback, Physiological</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Movement</style></keyword><keyword><style  face="normal" font="default" size="100%">Robotics</style></keyword><keyword><style  face="normal" font="default" size="100%">Touch</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/21474878</style></url></web-urls></urls><volume><style face="normal" font="default" 
size="100%">8</style></volume><pages><style face="normal" font="default" size="100%">036005</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;The combination of brain-computer interfaces (BCIs) with robot-assisted physical therapy constitutes a promising approach to neurorehabilitation of patients with severe hemiparetic syndromes caused by cerebrovascular brain damage (e.g. stroke) and other neurological conditions. In such a scenario, a key aspect is how to reestablish the disrupted sensorimotor feedback loop. However, to date it is an open question how artificially closing the sensorimotor feedback loop influences the decoding performance of a BCI. In this paper, we address this question by studying six healthy subjects and two stroke patients. 
We present empirical evidence that haptic feedback, provided by a seven-degree-of-freedom robotic arm, facilitates online decoding of arm movement intention. The results support the feasibility of future rehabilitative treatments based on the combination of robot-assisted physical therapy with BCIs.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">Mooij, J M</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A graphical model framework for decoding in the visual ERP-based BCI speller.</style></title><secondary-title><style face="normal" font="default" size="100%">Neural Comput</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neural Comput</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Artificial Intelligence</style></keyword><keyword><style  face="normal" font="default" size="100%">Computer User Training</style></keyword><keyword><style  face="normal" font="default" size="100%">Discrimination Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Visual</style></keyword><keyword><style  face="normal" 
font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Language</style></keyword><keyword><style  face="normal" font="default" size="100%">Models, Neurological</style></keyword><keyword><style  face="normal" font="default" size="100%">Models, Theoretical</style></keyword><keyword><style  face="normal" font="default" size="100%">Reading</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Visual Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Visual Perception</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/20964540</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">23</style></volume><pages><style face="normal" font="default" size="100%">160-82</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. 
We suggest two models for generating brain signals conditioned on the stimulus events. Both models incorporate letter frequency information but assume different dependencies between brain signals and stimulus events. For both models, we derive decoding rules and perform discriminative training. We show on real visual speller data how decoding performance improves by incorporating letter frequency information and using a more realistic graphical model for the dependencies between the brain signals and the stimulus events. Furthermore, we discuss how the standard approach to decoding can be seen as a special case of the graphical model framework. 
The letter also gives more insight into the discriminative approach for decoding in the visual speller system.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Murguialday, A Ramos</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Bensch, M</style></author><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">S Halder</style></author><author><style face="normal" font="default" size="100%">Nijboer, F</style></author><author><style face="normal" font="default" size="100%">Schoelkopf, Bernhard</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author><author><style face="normal" font="default" size="100%">Gharabaghi, A</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Transition from the locked in to the completely locked-in state: a physiological analysis.</style></title><secondary-title><style face="normal" font="default" size="100%">Clin Neurophysiol</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Clin Neurophysiol</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Amyotrophic Lateral Sclerosis</style></keyword><keyword><style  face="normal" font="default" size="100%">Area Under Curve</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids for 
Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Disease Progression</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Electromyography</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/20888292</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">122</style></volume><pages><style face="normal" font="default" size="100%">925-33</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;h4 style=&quot;font-size: 13px; margin: 0px 0.25em 0px 0px; text-transform: uppercase; float: left; font-family: arial, helvetica, clean, sans-serif; line-height: 17px;&quot;&gt;OBJECTIVE:&amp;nbsp;&lt;/h4&gt;
&lt;p style=&quot;margin: 0px 0px 0.5em; font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;To clarify the physiological and behavioral boundaries between locked-in (LIS) and the completely locked-in state (CLIS) (no voluntary eye movements, no communication possible) through electrophysiological data and to secure&amp;nbsp;&lt;span class=&quot;highlight&quot;&gt;brain-computer-interface&lt;/span&gt;&amp;nbsp;(&lt;span class=&quot;highlight&quot;&gt;BCI&lt;/span&gt;) communication.&lt;/p&gt;
&lt;h4 style=&quot;font-size: 13px; margin: 0px 0.25em 0px 0px; text-transform: uppercase; float: left; font-family: arial, helvetica, clean, sans-serif; line-height: 17px;&quot;&gt;METHODS:&amp;nbsp;&lt;/h4&gt;
&lt;p style=&quot;margin: 0px 0px 0.5em; font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Electromyography from facial muscles and the external anal sphincter (EAS), electrooculography, and electrocorticographic data were acquired during different psychophysiological tests to define electrophysiological differences in an amyotrophic lateral sclerosis (ALS) patient, who was followed for nine months with an intracranially implanted grid of 112 electrodes while passing from the LIS to the CLIS.&lt;/p&gt;
&lt;h4 style=&quot;font-size: 13px; margin: 0px 0.25em 0px 0px; text-transform: uppercase; float: left; font-family: arial, helvetica, clean, sans-serif; line-height: 17px;&quot;&gt;RESULTS:&amp;nbsp;&lt;/h4&gt;
&lt;p style=&quot;margin: 0px 0px 0.5em; font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;At the very end of the LIS there was no facial muscle or external anal sphincter activity, but eye control remained. Eye movements were slow and lasted for short periods only. During CLIS, event-related&amp;nbsp;&lt;span class=&quot;highlight&quot;&gt;brain&lt;/span&gt;&amp;nbsp;potentials (ERPs) to passive limb movements and auditory stimuli were recorded, whereas vibrotactile stimulation of different body parts elicited no ERP response.&lt;/p&gt;
&lt;h4 style=&quot;font-size: 13px; margin: 0px 0.25em 0px 0px; text-transform: uppercase; float: left; font-family: arial, helvetica, clean, sans-serif; line-height: 17px;&quot;&gt;CONCLUSIONS:&amp;nbsp;&lt;/h4&gt;
&lt;p style=&quot;margin: 0px 0px 0.5em; font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;The results presented contradict the commonly accepted assumption that the EAS is the last remaining muscle under voluntary control and demonstrate complete loss of eye movements in CLIS. The eye muscles were shown to be the last muscle group under voluntary control. The findings suggest that ALS is a multisystem disorder that even affects afferent sensory pathways.&lt;/p&gt;
&lt;h4 style=&quot;font-size: 13px; margin: 0px 0.25em 0px 0px; text-transform: uppercase; float: left; font-family: arial, helvetica, clean, sans-serif; line-height: 17px;&quot;&gt;SIGNIFICANCE:&amp;nbsp;&lt;/h4&gt;
&lt;p style=&quot;margin: 0px 0px 0.5em; font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Auditory and proprioceptive&amp;nbsp;&lt;span class=&quot;highlight&quot;&gt;brain-computer-interface&lt;/span&gt;&amp;nbsp;(&lt;span class=&quot;highlight&quot;&gt;BCI&lt;/span&gt;) systems are the only remaining communication channels in CLIS.&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">5</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Haselager, Pim</style></author><author><style face="normal" font="default" size="100%">Vlek, Rutger</style></author><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Nijboer, F</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A note on ethical aspects of BCI.</style></title><secondary-title><style face="normal" font="default" size="100%">Neural Netw</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neural Netw</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Bioethics</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication</style></keyword><keyword><style  face="normal" font="default" size="100%">Communications Media</style></keyword><keyword><style  face="normal" font="default" size="100%">Cooperative Behavior</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Informed Consent</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Professional-Patient Relations</style></keyword><keyword><style  face="normal" font="default" size="100%">Quadriplegia</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">11/2009 </style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/19616405</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">22</style></volume><pages><style face="normal" font="default" size="100%">1352-7</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;This paper focuses on ethical aspects of BCI, as a research and a clinical tool, that are challenging for practitioners currently working in the field. 
Specifically, the difficulties involved in acquiring informed consent from locked-in patients are investigated, in combination with an analysis of the shared moral responsibility in BCI teams, and the complications encountered in establishing effective communication with media.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">9</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Overlap and refractory effects in a brain-computer interface speller based on the visual P300 event-related potential.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Cognition</style></keyword><keyword><style  face="normal" font="default" size="100%">Computer Simulation</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Event-Related Potentials, P300</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Models, Neurological</style></keyword><keyword><style  face="normal" font="default" size="100%">Pattern Recognition, Automated</style></keyword><keyword><style  face="normal" font="default" size="100%">Photic Stimulation</style></keyword><keyword><style  face="normal" font="default" size="100%">Semantics</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">Task Performance and Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Writing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2009</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/19255462</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">026003</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;We reveal the presence of refractory and overlap effects in the event-related potentials in visual P300 speller datasets, and we show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. 
However, we show that such effects are dependent on stimulus parameters: an alternative stimulus type based on apparent motion suffers less from the refractory effects and leads to an improved letter prediction performance.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Widman, Guido</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Tangermann, Michael</style></author><author><style face="normal" font="default" size="100%">Rosenstiel, W.</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Elger, Christian</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Voluntary brain regulation and communication with electrocorticogram signals.</style></title><secondary-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Biofeedback, Psychology</style></keyword><keyword><style  face="normal" font="default" size="100%">Cerebral Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids 
for Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Dominance, Cerebral</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Epilepsies, Partial</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Motor Activity</style></keyword><keyword><style  face="normal" font="default" size="100%">Motor Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">Software</style></keyword><keyword><style  face="normal" font="default" size="100%">Somatosensory Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Theta Rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Writing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/18495541</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">13</style></volume><pages><style face="normal" font="default" size="100%">300-6</style></pages><language><style 
face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span class=&quot;highlight&quot; style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Brain-computer interfaces&lt;/span&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;&amp;nbsp;(BCIs) can be used for communication in writing without muscular activity or for learning to control seizures by voluntary regulation of&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;highlight&quot; style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;brain&lt;/span&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;&amp;nbsp;signals such as the electroencephalogram (EEG). Three of five patients with epilepsy were able to spell their names with electrocorticogram (ECoG) signals derived from motor-related areas within only one or two training sessions. Imagery of finger or tongue movements was classified with support-vector classification of autoregressive coefficients derived from the ECoG signals. After training of the classifier, binary classification responses were used to select letters from a&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;highlight&quot; style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;computer&lt;/span&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;-generated menu. Offline analysis showed increased theta activity in the unsuccessful patients, whereas the successful patients exhibited dominant sensorimotor rhythms that they could control. 
The high spatial resolution and increased signal-to-noise ratio in ECoG signals, combined with short training periods, may offer an alternative for communication in complete paralysis, locked-in syndrome, and motor restoration.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Nijboer, F</style></author><author><style face="normal" font="default" size="100%">Kübler, A.</style></author><author><style face="normal" font="default" size="100%">Matuz, T.</style></author><author><style face="normal" font="default" size="100%">Adrian Furdea</style></author><author><style face="normal" font="default" size="100%">Mochty, Ursula</style></author><author><style face="normal" font="default" size="100%">Jordan, M.</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Mellinger, Jürgen</style></author><author><style face="normal" font="default" size="100%">Bensch, M</style></author><author><style face="normal" font="default" size="100%">Tangermann, Michael</style></author><author><style face="normal" font="default" size="100%">Widmann, G.</style></author><author><style face="normal" font="default" size="100%">Elger, Christian</style></author><author><style face="normal" font="default" size="100%">Rosenstiel, W.</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors></contributors><titles><title><style face="normal" 
font="default" size="100%">Brain Computer Interfaces for Communication in Paralysis: a Clinical-Experimental Approach.</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">experiment</style></keyword><keyword><style  face="normal" font="default" size="100%">Medical sciences Medicine</style></keyword><keyword><style  face="normal" font="default" size="100%">paralyzed patients</style></keyword><keyword><style  face="normal" font="default" size="100%">slow cortical potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">Thought-Translation Device</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2007</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://psydok.sulb.uni-saarland.de/volltexte/2008/2154/</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Virtual Library of Psychology at Saarland University and State Library, GERMANY, PsyDok [http://psydok.sulb.uni-saarland.de/phpoai/oai2.php] (Germany)</style></publisher><isbn><style face="normal" font="default" size="100%">9780262256049</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;color: #333333; font-family: sans-serif; font-size: 15px; line-height: 24px;&quot;&gt;An overview of different approaches to brain-computer interfaces (BCIs) developed in our laboratory is given. An important clinical application of BCIs is to enable communication or environmental control in severely paralyzed patients. 
The BCI “Thought-Translation Device (TTD)” allows verbal communication through the voluntary self-regulation of brain signals (e.g., slow cortical potentials (SCPs)), which is achieved by operant feedback training. Humans' ability to self-regulate their SCPs is used to move a cursor toward a target that contains a selectable letter set. Two different approaches were followed to develop Web browsers that could be controlled with binary brain responses. Implementing more powerful classification methods including different signal parameters such as oscillatory features improved our BCI considerably. It was also tested on signals with implanted electrodes. Most BCIs provide the user with a visual feedback interface. Visually impaired patients require an auditory feedback mode. A procedure using auditory (sonified) feedback of multiple EEG parameters was evaluated. Properties of the auditory systems are reported and the results of two experiments with auditory feedback are presented. Clinical data of eight ALS patients demonstrated that all patients were able to acquire efficient brain control of one of the three available BCI systems (SCP, µ-rhythm, and P300); most of them used the SCP-BCI. A controlled comparison of the three systems in a group of ALS patients, however, showed that the P300-BCI and the µ-BCI are faster and more easily acquired than the SCP-BCI, at least in patients with some rudimentary motor control left. Six patients who started BCI training after entering the completely locked-in state did not achieve reliable communication skills with any BCI system. 
One completely locked-in patient was able to communicate briefly with a pH meter, but lost control afterward.&lt;/span&gt;&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Schröder, Michael</style></author><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Wilhelm, Barbara</style></author><author><style face="normal" font="default" size="100%">Nijboer, F</style></author><author><style face="normal" font="default" size="100%">Mochty, Ursula</style></author><author><style face="normal" font="default" size="100%">Widman, Guido</style></author><author><style face="normal" font="default" size="100%">Elger, Christian</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Kübler, A.</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Classifying EEG and ECoG signals without subject training for fast BCI implementation: comparison of nonparalyzed and completely paralyzed subjects.</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Trans Neural Syst Rehabil Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">IEEE Trans Neural Syst Rehabil Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Artificial 
Intelligence</style></keyword><keyword><style  face="normal" font="default" size="100%">Cluster Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Computer User Training</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Paralysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Pattern Recognition, Automated</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/16792289</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">14</style></volume><pages><style face="normal" font="default" size="100%">183-6</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;We summarize results from a series of related studies that aim to develop a motor-imagery-&lt;/span&gt;&lt;span 
class=&quot;highlight&quot; style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;based&lt;/span&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;highlight&quot; style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;brain-computer interface&lt;/span&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;&amp;nbsp;using a single recording session of electroencephalogram (EEG) or electrocorticogram (ECoG) signals for each subject. We apply the same experimental and analytical methods to 11 nonparalysed subjects (eight EEG, three ECoG), and to five paralyzed subjects (four EEG, one ECoG) who had been unable to communicate for some time. While it was relatively easy to obtain classifiable signals quickly from most of the nonparalyzed subjects, it proved impossible to classify the signals obtained from the paralyzed patients by the same methods. This highlights the fact that though certain BCI paradigms may work well with healthy subjects, this does not necessarily indicate success with the target user group. 
We outline possible reasons for this failure to transfer.&lt;/span&gt;&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Schröder, Michael</style></author><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Widman, Guido</style></author><author><style face="normal" font="default" size="100%">Elger, Christian</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Franke, Katrin</style></author><author><style face="normal" font="default" size="100%">Müller, Klaus-Robert</style></author><author><style face="normal" font="default" size="100%">Nickolay, Bertram</style></author><author><style face="normal" font="default" size="100%">Schäfer, Ralf</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Classifying Event-Related Desynchronization in EEG, ECoG and MEG Signals.</style></title><secondary-title><style face="normal" font="default" size="100%">Pattern Recognition</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">http://dx.doi.org/10.1007/11861898_41</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer Berlin / Heidelberg</style></publisher><volume><style face="normal" font="default" size="100%">4174</style></volume><pages><style face="normal" font="default" size="100%">404-413</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-44412-1</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;color: #333333; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif; font-size: 13px; line-height: 20px;&quot;&gt;We employed three different brain signal recording methods to perform Brain-Computer Interface studies on untrained subjects. In all cases, we aim to develop a system that could be used for fast, reliable preliminary screening in clinical BCI application, and we are interested in knowing how long screening sessions need to be. Good performance could be achieved, on average, after the first 200 trials in EEG, 75–100 trials in MEG, or 25–50 trials in ECoG. We compare the performance of Independent Component Analysis and the Common Spatial Pattern algorithm in each of the three sensor types, finding that spatial filtering does not help in MEG, helps a little in ECoG, and improves performance a great deal in EEG. 
In all cases the unsupervised ICA algorithm performed at least as well as the supervised CSP algorithm, which can suffer from poor generalization performance due to overfitting, particularly in ECoG and MEG.&lt;/span&gt;&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches.</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Computer Interfaces</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style  face="normal" font="default" size="100%">11/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.researchgate.net/publication/237615110_Optimizing_Spatial_Filters_for_BCI</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. 
We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support-Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher-noise cases, when either few trials are used for training, or with the most poorly performing subjects. This is a reversal of the usual &quot;rich get richer&quot; effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application, where high levels of noise are to be expected.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Regularised CSP for Sensor Selection in BCI.</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" 
font="default" size="100%">http://edoc.mpg.de/312060</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The Common Spatial Pattern (CSP) algorithm is a highly successful method for efficiently calculating spatial filters for brain signal classification. Spatial filtering can improve classification performance considerably, but demands that a large number of electrodes be mounted, which is inconvenient in day-to-day BCI usage. The CSP algorithm is also known for its tendency to overfit, i.e., to learn the noise in the training set rather than the signal. Both problems motivate an approach in which spatial filters are sparsified. We briefly sketch a reformulation of the problem which allows us to do this, using 1-norm regularisation. Focusing on the electrode selection issue, we present preliminary results on EEG data sets that suggest that effective spatial filters may be computed with as few as 10–20 electrodes, hence offering the potential to simplify the practical realisation of BCI systems significantly.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Time-Dependent Demixing of Task-Relevant EEG Signals.</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">09/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://edoc.mpg.de/312053</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Given a spatial filtering algorithm that has allowed us to identify task-relevant EEG sources, we present a simple approach for monitoring the activity of these sources while remaining relatively robust to changes in other (task-irrelevant) brain activity. The idea is to keep spatial *patterns* fixed rather than spatial filters, when transferring from training to test sessions or from one time window to another. We show that a fixed spatial pattern (FSP) approach, using a moving-window estimate of signal covariances, can be more robust to non-stationarity than a fixed spatial filter (FSF) approach.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Schroeder, Michael</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Preissl, Hubert</style></author><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Mellinger, Jürgen</style></author><author><style face="normal" font="default" size="100%">Bogdan, Martin</style></author><author><style face="normal" font="default" size="100%">Rosenstiel, W.</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author><author><style face="normal" font="default" size="100%">Schoelkopf, 
Bernhard</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A Brain Computer Interface with Online Feedback based on Magnetoencephalography.</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Computer Interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">User Modelling for Computer Human Interaction</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2005</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2005</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.researchgate.net/publication/221346004_A_brain_computer_interface_with_online_feedback_based_on_magnetoencephalography</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, sans-serif; font-size: 12px; line-height: 16px;&quot;&gt;The aim of this paper is to show that machine learning techniques can be used to derive a classifying function for human brain signal data measured by magnetoencephalography (MEG), for use in a brain computer interface (BCI). This is especially helpful for quickly evaluating whether a BCI approach based on electroencephalography, on which training may be slower due to a lower signal-to-noise ratio, is likely to succeed. We apply RCE and regularized SVMs to the experimental data of ten healthy subjects performing a motor imagery task. Four subjects were able to use a trained classifier to write a short name. Further analysis gives evidence that the proposed imagination task is suboptimal for the possible extension to a multiclass interface. 
To the best of our knowledge, this paper presents the first working online MEG-based BCI and is therefore a “proof of concept”.&lt;/span&gt;&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Widman, Guido</style></author><author><style face="normal" font="default" size="100%">Schroeder, Michael</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Rosenstiel, W.</style></author><author><style face="normal" font="default" size="100%">Elger, Christian</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Methods Towards Invasive Human Brain Computer Interfaces.</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Computer Interfaces</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2005</style></year><pub-dates><date><style  face="normal" font="default" size="100%">2005</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.8486</style></url></web-urls></urls><isbn><style face="normal" font="default" size="100%">0-262-19534-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;div class=&quot;page&quot; title=&quot;Page 1&quot;&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 10.000000pt; font-family: 'Times';&quot;&gt;During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low signal-to-noise ratio of the EEG [16]. We are currently investigating whether BCIs based on electrocorticography (ECoG) are a viable alternative. In this paper we present the method and examples of intracranial EEG recordings of three epilepsy patients with electrode grids placed on the motor cortex. The patients were asked to repeatedly imagine movements of two kinds, e.g., tongue or finger movements. We analyze the classifiability of the data using Support Vector Machines (SVMs) [18, 21] and Recursive Channel Elimination (RCE) [11].&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Schröder, Michael</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Hinterberger, T.</style></author><author><style face="normal" font="default" size="100%">Bogdan, Martin</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author><author><style face="normal" font="default" size="100%">Rosenstiel, W.</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Vesin, J.M.; Ebrahimi, T. (Editors)</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Robust EEG Channel Selection across Subjects for Brain-Computer Interfaces.</style></title><secondary-title><style face="normal" font="default" size="100%">EURASIP Journal on Advances in Signal Processing</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">channel selection</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">feature selection</style></keyword><keyword><style  face="normal" font="default" size="100%">recursive channel elimination</style></keyword><keyword><style  face="normal" font="default" size="100%">support vector machine</style></keyword></keywords><dates><year><style  face="normal" font="default" 
size="100%">2005</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2005</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.researchgate.net/publication/26532072_Robust_EEG_Channel_Selection_across_Subjects_for_Brain-Computer_Interfaces</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">2005</style></volume><pages><style face="normal" font="default" size="100%">3103–3112</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Most EEG-based brain-computer interface (BCI) paradigms come with specific electrode positions; for example, a visual-based BCI uses electrode positions close to the primary visual cortex. For new BCI paradigms it is usually not known where task-relevant activity can be measured from the scalp. For individual subjects, Lal et al. in 2004 showed that recording positions can be found without the use of prior knowledge about the paradigm used. However, it remains unclear to what extent their method of recursive channel elimination (RCE) can be generalized across subjects. In this paper we transfer channel rankings from a group of subjects to a new subject. For motor imagery tasks the results are promising, although cross-subject channel selection does not quite achieve the performance of channel selection on data of single subjects. 
Although the RCE method was not provided with prior knowledge about the mental task, channels that are well known to be important (from a physiological point of view) were consistently selected, whereas task-irrelevant channels were reliably disregarded.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Lal, T.N</style></author><author><style face="normal" font="default" size="100%">Bierig, K.</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An Auditory Paradigm for Brain–Computer Interfaces.</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Computer Interfaces</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2004</style></year><pub-dates><date><style  face="normal" font="default" size="100%">2004</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://papers.nips.cc/paper/2551-an-auditory-paradigm-for-brain-computer-interfaces</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">
&lt;p&gt;&lt;span style=&quot;font-size: 10.000000pt; font-family: 'CMR10';&quot;&gt;Motivated by the particular problems involved in communicating with “locked-in” paralysed patients, we aim to develop a brain-computer interface that uses auditory stimuli. We describe a paradigm that allows a user to make a binary decision by focusing attention on one of two concurrent auditory stimulus sequences. Using Support Vector Machine classification and Recursive Channel Elimination on the independent components of averaged event-related potentials, we show that an untrained user’s EEG data can be classified with an encouragingly high level of accuracy. This suggests that it is possible for users to modulate EEG signals in a single trial by the conscious direction of attention, well enough to be useful in BCI.&lt;/span&gt;&lt;/p&gt;</style></abstract></record></records></xml>