%0 Journal Article %J Scientific reports %D 2016 %T Word pair classification during imagined speech using direct brain recordings. %A Martin, Stéphanie %A Peter Brunner %A Iturrate, Iñaki %A Millán, José Del R. %A Gerwin Schalk %A Robert T. Knight %A Pasley, Brian N. %X People who cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70-150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation for imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof-of-concept study for basic decoding of speech imagery, and delineate a number of key challenges to the use of speech imagery neural representations for clinical applications.
%B Scientific reports %V 6 %P 25803 %8 May %G eng %U http://www.ncbi.nlm.nih.gov/pubmed/27165452 %R 10.1038/srep25803 %0 Journal Article %J Frontiers in Neuroengineering %D 2014 %T Decoding spectrotemporal features of overt and covert speech from the human cortex. %A Martin, Stéphanie %A Peter Brunner %A Holdgraf, Chris %A Heinze, Hans-Jochen %A Nathan E. Crone %A Rieger, Jochem %A Gerwin Schalk %A Robert T. Knight %A Pasley, Brian N. %K covert speech %K decoding model %K Electrocorticography %K pattern recognition %K speech production %X Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticographic recordings from epilepsy patients performing overt and silent reading tasks. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subjects remained in a resting state. We first built a high gamma (70-150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 0.00001; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features.
Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and post-central gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. %B Frontiers in Neuroengineering %V 7 %8 March %G eng %U http://www.ncbi.nlm.nih.gov/pubmed/24904404 %N 14 %R 10.3389/fneng.2014.00014