<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Zuoguan</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Anatomically Constrained Decoding of Finger Flexion from Electrocorticographic Signals</style></title><secondary-title><style face="normal" font="default" size="100%">NIPS</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2011</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Zuoguan</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Miller, John W</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Prior knowledge improves decoding of finger flexion from electrocorticographic signals.</style></title><secondary-title><style face="normal" font="default" size="100%">Front Neurosci</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Front Neurosci</style></alt-title></titles><keywords><keyword><style face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style face="normal" font="default" size="100%">decoding algorithm</style></keyword><keyword><style face="normal" font="default"
size="100%">electrocorticographic</style></keyword><keyword><style face="normal" font="default" size="100%">finger flexion</style></keyword><keyword><style face="normal" font="default" size="100%">machine learning</style></keyword><keyword><style face="normal" font="default" size="100%">prior knowledge</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style face="normal" font="default" size="100%">11/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22144944</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">127</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then use these signals, in the absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these studies usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs, but typically incorporated little prior information about the target movement parameter. In this paper, we incorporate prior knowledge using a Bayesian decoding method, and use it to decode finger flexion from ECoG signals. Specifically, we exploit the constraints that govern finger flexion and incorporate these constraints into the construction, structure, and probabilistic functions of the prior model of a switched non-parametric dynamic system (SNDS). Given a measurement model resulting from a traditional linear regression method, we decoded finger flexion using posterior estimation that combined the prior and measurement models. Our results show that the Bayesian decoding model, which incorporates prior knowledge, improves decoding performance compared to a linear regression model, which does not. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Zuoguan</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Miller, Kai J.</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding finger flexion from electrocorticographic signals using sparse Gaussian process.</style></title><secondary-title><style face="normal" font="default" size="100%">International Conference on Pattern Recognition - ICPR</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2010</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">A brain-computer interface (BCI) creates a direct communication pathway between the brain and an external device, and can thereby restore function in people with severe motor disabilities. A core component of a BCI system is the decoding algorithm that translates brain signals into action commands for an output device. Most current decoding algorithms are based on linear models (e.g., derived using linear regression) that may have important shortcomings. The use of nonlinear models (e.g., neural networks) could overcome some of these shortcomings, but has difficulties with high-dimensional feature spaces. Here we propose another decoding algorithm that is based on the sparse Gaussian process with pseudo-inputs (SPGP). As a nonparametric method, it can model more complex relationships than linear methods. As a kernel method, it can readily deal with high-dimensional feature spaces. The evaluations shown in this paper demonstrate that SPGP can decode the flexion of finger movements from electrocorticographic (ECoG) signals more accurately than a previously described algorithm that used a linear model. In addition, by formulating problems in the Bayesian probabilistic framework, SPGP can provide an estimate of the prediction uncertainty.
Furthermore, the trained SPGP offers a very effective way of identifying important features.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pei, Xiao-Mei</style></author><author><style face="normal" font="default" size="100%">Zheng, Shi Dong</style></author><author><style face="normal" font="default" size="100%">Xu, Jin</style></author><author><style face="normal" font="default" size="100%">Bin, Guang-yu</style></author><author><style face="normal" font="default" size="100%">Wang, Zuoguan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Multi-channel linear descriptors for event-related EEG collected in brain computer interface.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style face="normal" font="default" size="100%">Evoked Potentials, Motor</style></keyword><keyword><style face="normal" font="default" size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style face="normal" font="default" size="100%">Motor Cortex</style></keyword><keyword><style face="normal" font="default" size="100%">Movement</style></keyword><keyword><style face="normal" font="default" size="100%">Pattern Recognition, Automated</style></keyword><keyword><style face="normal" font="default" size="100%">Reproducibility of Results</style></keyword><keyword><style face="normal" font="default" size="100%">Sensitivity and
Specificity</style></keyword><keyword><style face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style face="normal" font="default" size="100%">03/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/16510942</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">52-8</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Using three multi-channel linear descriptors, i.e. spatial complexity (omega), field power (sigma) and frequency of field changes (phi), event-related EEG data within 8-30 Hz were investigated during imagination of left or right hand movement. Studies of the event-related EEG data indicate that a two-channel version of omega, sigma and phi could reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left and right hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors omega, sigma and phi for characterizing event-related EEG. The preliminary results show that omega and sigma together with phi have good separability for left and right hand motor imagery tasks, and could be considered for classification of two classes of EEG patterns in brain-computer interface applications.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record></records></xml>