<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">Mooij, J M</style></author><author><style face="normal" font="default" size="100%">Hill, Jeremy</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A graphical model framework for decoding in the visual ERP-based BCI speller.</style></title><secondary-title><style face="normal" font="default" size="100%">Neural Comput</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neural Comput</style></alt-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Artificial Intelligence</style></keyword><keyword><style face="normal" font="default" size="100%">Computer User Training</style></keyword><keyword><style face="normal" font="default" size="100%">Discrimination Learning</style></keyword><keyword><style face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style face="normal" font="default" size="100%">Evoked Potentials</style></keyword><keyword><style face="normal" font="default" size="100%">Evoked Potentials, Visual</style></keyword><keyword><style face="normal" font="default" size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">Language</style></keyword><keyword><style face="normal" font="default" size="100%">Models, Neurological</style></keyword><keyword><style face="normal" font="default" size="100%">Models, Theoretical</style></keyword><keyword><style face="normal" font="default"
size="100%">Reading</style></keyword><keyword><style face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style face="normal" font="default" size="100%">User-Computer Interface</style></keyword><keyword><style face="normal" font="default" size="100%">Visual Cortex</style></keyword><keyword><style face="normal" font="default" size="100%">Visual Perception</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style face="normal" font="default" size="100%">01/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/20964540</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">23</style></volume><pages><style face="normal" font="default" size="100%">160-82</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events. Both models incorporate letter frequency information but assume different dependencies between brain signals and stimulus events. For both models, we derive decoding rules and perform a discriminative training. We show on real visual speller data how decoding performance improves by incorporating letter frequency information and using a more realistic graphical model for the dependencies between the brain signals and the stimulus events. Furthermore, we discuss how the standard approach to decoding can be seen as a special case of the graphical model framework. The letter also gives more insight into the discriminative approach for decoding in the visual speller system.&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ramos Murguialday, A</style></author><author><style face="normal" font="default" size="100%">Hill, Jeremy</style></author><author><style face="normal" font="default" size="100%">Bensch, M</style></author><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">Halder, S</style></author><author><style face="normal" font="default" size="100%">Nijboer, F</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author><author><style face="normal" font="default" size="100%">Birbaumer, N</style></author><author><style face="normal" font="default" size="100%">Gharabaghi, A</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Transition from the locked-in to the completely locked-in state: a physiological analysis.</style></title><secondary-title><style face="normal" font="default" size="100%">Clin Neurophysiol</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Clin Neurophysiol</style></alt-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Adult</style></keyword><keyword><style face="normal" font="default" size="100%">Amyotrophic Lateral Sclerosis</style></keyword><keyword><style face="normal" font="default" size="100%">Area Under Curve</style></keyword><keyword><style face="normal" font="default" size="100%">Brain</style></keyword><keyword><style face="normal" font="default" size="100%">Communication Aids for
Disabled</style></keyword><keyword><style face="normal" font="default" size="100%">Disease Progression</style></keyword><keyword><style face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style face="normal" font="default" size="100%">Electromyography</style></keyword><keyword><style face="normal" font="default" size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">Male</style></keyword><keyword><style face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style face="normal" font="default" size="100%">06/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/20888292</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">122</style></volume><pages><style face="normal" font="default" size="100%">925-33</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;h4&gt;OBJECTIVE:&lt;/h4&gt;
&lt;p&gt;To clarify the physiological and behavioral boundaries between the locked-in state (LIS) and the completely locked-in state (CLIS; no voluntary eye movements, no communication possible) through electrophysiological data, and to secure brain-computer interface (BCI) communication.&lt;/p&gt;
&lt;h4&gt;METHODS:&lt;/h4&gt;
&lt;p&gt;Electromyography from facial muscles and the external anal sphincter (EAS), electrooculography, and electrocorticographic data during different psychophysiological tests were acquired over nine months to define electrophysiological differences in an amyotrophic lateral sclerosis (ALS) patient with an intracranially implanted grid of 112 electrodes, while the patient passed from the LIS to the CLIS.&lt;/p&gt;
&lt;h4&gt;RESULTS:&lt;/h4&gt;
&lt;p&gt;At the very end of the LIS there was no facial muscle or external anal sphincter activity, but eye control remained. Eye movements were slow and lasted for short periods only. During CLIS, event-related brain potentials (ERPs) to passive limb movements and auditory stimuli were recorded, whereas vibrotactile stimulation of different body parts resulted in no ERP response.&lt;/p&gt;
&lt;h4&gt;CONCLUSIONS:&lt;/h4&gt;
&lt;p&gt;The results presented contradict the commonly accepted assumption that the EAS is the last remaining muscle under voluntary control and demonstrate complete loss of eye movements in CLIS; the eye muscles were shown to be the last muscle group under voluntary control. The findings suggest ALS is a multisystem disorder, even affecting afferent sensory pathways.&lt;/p&gt;
&lt;h4&gt;SIGNIFICANCE:&lt;/h4&gt;
&lt;p&gt;Auditory and proprioceptive brain-computer interface (BCI) systems are the only remaining communication channels in CLIS.&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">5</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Martens, S M M</style></author><author><style face="normal" font="default" size="100%">Hill, Jeremy</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Schölkopf, B</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Overlap and refractory effects in a brain-computer interface speller based on the visual P300 event-related potential.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style face="normal" font="default" size="100%">Brain</style></keyword><keyword><style face="normal" font="default" size="100%">Cognition</style></keyword><keyword><style face="normal" font="default" size="100%">Computer Simulation</style></keyword><keyword><style face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style face="normal" font="default" size="100%">Event-Related Potentials, P300</style></keyword><keyword><style face="normal" font="default"
size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">Models, Neurological</style></keyword><keyword><style face="normal" font="default" size="100%">Pattern Recognition, Automated</style></keyword><keyword><style face="normal" font="default" size="100%">Photic Stimulation</style></keyword><keyword><style face="normal" font="default" size="100%">Semantics</style></keyword><keyword><style face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style face="normal" font="default" size="100%">Task Performance and Analysis</style></keyword><keyword><style face="normal" font="default" size="100%">User-Computer Interface</style></keyword><keyword><style face="normal" font="default" size="100%">Writing</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style face="normal" font="default" size="100%">04/2009</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/19255462</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">026003</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We reveal the presence of refractory and overlap effects in the event-related potentials in visual P300 speller datasets, and we show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. However, we show that such effects are dependent on stimulus parameters: an alternative stimulus type based on apparent motion suffers less from the refractory effects and leads to an improved letter prediction performance.&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record></records></xml>