02272nas a2200277 4500008004100000245008600041210006900127260001200196520140600208653003301614653002901647653002001676653000901696653002501705653002401730653002001754653002201774100001401796700001401810700002401824700001501848700001901863700001901882700001601901856007701917 2015 eng d00aBrain-to-text: Decoding spoken sentences from phone representations in the brain.0 aBraintotext Decoding spoken sentences from phone representations c06/20153 aIt has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones.
In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.10aautomatic speech recognition10abrain-computer interface10abroadband gamma10aECoG10aElectrocorticography10apattern recognition10aspeech decoding10aspeech production1 aHerff, C.1 aHeger, D.1 ade Pesters, Adriana1 aTelaar, D.1 aBrunner, Peter1 aSchalk, Gerwin1 aSchultz, T. uhttp://journal.frontiersin.org/article/10.3389/fnins.2015.00217/abstract01857nas a2200421 4500008004100000022001400041245008900055210006900144260001200213300001100225490000700236520059500243653001800838653002900856653003500885653002500920653002300945653004300968653003201011653002101043653002201064653001801086100001801104700001901122700002001141700001701161700002401178700001901202700002101221700002001242700002301262700002201285700001601307700002401323700002101347700001901368856004801387 2014 eng d a1525-506900aProceedings of the Fifth International Workshop on Advances in Electrocorticography.0 aProceedings of the Fifth International Workshop on Advances in E c12/2014 a183-920 v413 a
The Fifth International Workshop on Advances in Electrocorticography convened in San Diego, CA, on November 7-8, 2013. Advancements in methodology, implementation, and commercialization across both research and clinical domains in the interval year since the last workshop were the focus of the gathering. Electrocorticography (ECoG) is now firmly established as a preferred signal source for advanced research in functional, cognitive, and neuroprosthetic domains. Published output in ECoG fields has increased tenfold in the past decade. These proceedings attempt to summarize the state of the art.
10aBrain Mapping10abrain-computer interface10aelectrical stimulation mapping10aElectrocorticography10afunctional mapping10aGamma-frequency electroencephalography10aHigh-frequency oscillations10aNeuroprosthetics10aSeizure detection10aSubdural grid1 aRitaccio, A L1 aBrunner, Peter1 aGunduz, Aysegul1 aHermes, Dora1 aHirsch, Lawrence, J1 aJacobs, Joshua1 aKamada, Kyousuke1 aKastner, Sabine1 aKnight, Robert, T.1 aLesser, Ronald, P1 aMiller, Kai1 aSejnowski, Terrence1 aWorrell, Gregory1 aSchalk, Gerwin uhttp://www.ncbi.nlm.nih.gov/pubmed/2546121301889nas a2200469 4500008004100000022001400041245008900055210006900144260001200213300001100225490000700236520052900243653001800772653002900790653002500819653004300844653003100887653002100918653002200939653001800961100001800979700002300997700002001020700001901040700001801059700002201077700002001099700001701119700002301136700002001159700001601179700001801195700002101213700001901234700001701253700001901270700002201289700002001311700002101331700001901352856004801371 2012 eng d a1525-506900aProceedings of the Third International Workshop on Advances in Electrocorticography.0 aProceedings of the Third International Workshop on Advances in E c12/2012 a605-130 v253 aThe Third International Workshop on Advances in Electrocorticography (ECoG) was convened in Washington, DC, on November 10-11, 2011. As in prior meetings, a true multidisciplinary fusion of clinicians, scientists, and engineers from many disciplines gathered to summarize contemporary experiences in brain surface recordings. 
The proceedings of this meeting serve as evidence of a very robust and transformative field but will yet again require revision to incorporate the advances that the following year will surely bring.10aBrain Mapping10abrain-computer interface10aElectrocorticography10aGamma-frequency electroencephalography10ahigh-frequency oscillation10aNeuroprosthetics10aSeizure detection10aSubdural grid1 aRitaccio, A L1 aBeauchamp, Michael1 aBosman, Conrado1 aBrunner, Peter1 aChang, Edward1 aCrone, Nathan, E.1 aGunduz, Aysegul1 aGupta, Disha1 aKnight, Robert, T.1 aLeuthardt, Eric1 aLitt, Brian1 aMoran, Daniel1 aOjemann, Jeffrey1 aParvizi, Josef1 aRamsey, Nick1 aRieger, Jochem1 aViventi, Jonathan1 aVoytek, Bradley1 aWilliams, Justin1 aSchalk, Gerwin uhttp://www.ncbi.nlm.nih.gov/pubmed/2316009603409nas a2200253 4500008004100000022001400041245009700055210006900152260001200221300000600233490000600239520266600245653002902911653002502940653002802965653000902993653001203002100001903014700001803033700002203051700001503073700001903088856004803107 2011 eng d a1662-453X00aRapid Communication with a "P300" Matrix Speller Using Electrocorticographic Signals (ECoG).0 aRapid Communication with a P300 Matrix Speller Using Electrocort c02/2011 a50 v53 aA brain-computer interface (BCI) can provide a non-muscular communication channel to severely disabled people. One particular realization of a BCI is the P300 matrix speller that was originally described by Farwell and Donchin (1988). This speller uses event-related potentials (ERPs) that include the P300 ERP. All previous online studies of the P300 matrix speller used scalp-recorded electroencephalography (EEG) and were limited in their communication performance to only a few characters per minute. In our study, we investigated the feasibility of using electrocorticographic (ECoG) signals for online operation of the matrix speller, and determined associated spelling rates. We used the matrix speller that is implemented in the BCI2000 system. 
This speller used ECoG signals that were recorded from frontal, parietal, and occipital areas in one subject. This subject spelled a total of 444 characters in online experiments. The results showed that the subject sustained a rate of 17 characters/min (i.e., 69 bits/min), and achieved a peak rate of 22 characters/min (i.e., 113 bits/min). Detailed analysis of the results suggests that ERPs over visual areas (i.e., visual evoked potentials) contribute significantly to the performance of the matrix speller BCI system. Our results also point to potential reasons for the apparent advantages in spelling performance of ECoG compared to EEG. Thus, with additional verification in more subjects, these results may further extend the communication options for people with serious neuromuscular disabilities.
10abrain-computer interface10aElectrocorticography10aevent-related potential10aP30010aspeller1 aBrunner, Peter1 aRitaccio, A L1 aEmrich, Joseph, F1 aBischof, H1 aSchalk, Gerwin uhttp://www.ncbi.nlm.nih.gov/pubmed/2136935101207nas a2200181 4500008004100000020002200041245003200063210003000095260001900125520069300144653000800837653002900845653002300874653002200897100001900919700001900938856006800957 2009 eng d a978-3-642-02811-300aBrain-Computer Interaction.0 aBrainComputer Interaction bSpringerc20093 aDetection and automated interpretation of attention-related or intention-related brain activity carries significant promise for many military and civilian applications. This interpretation of brain activity could provide information about a person’s intended movements, imagined movements, or attentional focus, and thus could be valuable for optimizing or replacing traditional motor-based communication between a person and a computer or other output devices. We describe here the objective and preliminary results of our studies in this area.
10aBCI10abrain-computer interface10aneural engineering10aneural prosthesis1 aBrunner, Peter1 aSchalk, Gerwin uhttp://link.springer.com/chapter/10.1007%2F978-3-642-02812-0_8101741nas a2200253 4500008004100000022001400041245008200055210006900137260001200206300001400218490000700232520097500239653000801214653002901222653002201251653002401273100003201297700002601329700002001355700002401375700001901399700002101418856004801439 2009 eng d a1873-624600aA scanning protocol for a sensorimotor rhythm-based brain-computer interface.0 ascanning protocol for a sensorimotor rhythmbased braincomputer i c02/2009 a169–1750 v803 aThe scanning protocol is a novel brain-computer interface (BCI) implementation that can be controlled with sensorimotor rhythms (SMRs) of the electroencephalogram (EEG). The user views a screen that shows four choices in a linear array with one marked as target. The four choices are successively highlighted for 2.5s each. When a target is highlighted, the user can select it by modulating the SMR. An advantage of this method is the capacity to choose among multiple choices with just one learned SMR modulation. Each of 10 naive users trained for ten 30 min sessions over 5 weeks. User performance improved significantly (p<0.001) over the sessions and ranged from 30 to 80% mean accuracy of the last three sessions (chance accuracy=25%). The incidence of correct selections depended on the target position. These results suggest that, with further improvements, a scanning protocol can be effective. The ultimate goal is to expand it to a large matrix of selections.10aBCI10abrain-computer interface10ascanning protocol10asensorimotor rhythm1 aFriedrich, Elisabeth, V. 
C.1 aMcFarland, Dennis, J.1 aNeuper, Christa1 aVaughan, Theresa, M1 aBrunner, Peter1 aWolpaw, Jonathan uhttp://www.ncbi.nlm.nih.gov/pubmed/1878660302473nas a2200289 4500008004100000022001400041245007500055210006900130260001200199300001600211490000800227520163800235653002901873653002801902653000801930653002801938653000901966653001901975100002101994700002402015700001902039700002102058700002102079700001502100700002002115856004802135 2009 eng d a1872-895200aToward a high-throughput auditory P300-based brain-computer interface.0 aToward a highthroughput auditory P300based braincomputer interfa c07/2009 a1252–12610 v1203 aOBJECTIVE: Brain-computer interface (BCI) technology can provide severely disabled people with non-muscular communication. For those most severely disabled, limitations in eye mobility or visual acuity may necessitate auditory BCI systems. The present study investigates the efficacy of the use of six environmental sounds to operate a 6x6 P300 Speller. METHODS: A two-group design was used to ascertain whether participants benefited from visual cues early in training. Group A (N=5) received only auditory stimuli during all 11 sessions, whereas Group AV (N=5) received simultaneous auditory and visual stimuli in initial sessions after which the visual stimuli were systematically removed. Stepwise linear discriminant analysis determined the matrix item that elicited the largest P300 response and thereby identified the desired choice. RESULTS: Online results and offline analyses showed that the two groups achieved equivalent accuracy. In the last session, eight of 10 participants achieved 50% or more, and four of these achieved 75% or more, online accuracy (2.8% accuracy expected by chance). Mean bit rates averaged about 2 bits/min, and maximum bit rates reached 5.6 bits/min. 
CONCLUSIONS: This study indicates that an auditory P300 BCI is feasible, that reasonable classification accuracy and rate of communication are achievable, and that the paradigm should be further evaluated with a group of severely disabled participants who have limited visual mobility. SIGNIFICANCE: With further development, this auditory P300 BCI could be of substantial value to severely disabled people who cannot use a visual BCI.10abrain-computer interface10abrain-machine interface10aEEG10aevent-related potential10aP30010aRehabilitation1 aKlobassa, D., S.1 aVaughan, Theresa, M1 aBrunner, Peter1 aSchwartz, N., E.1 aWolpaw, Jonathan1 aNeuper, C.1 aSellers, E., W. uhttp://www.ncbi.nlm.nih.gov/pubmed/19574091
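The bit-rate figures quoted in several of these abstracts (e.g. 69 bits/min for 17 characters/min, 2 bits/min mean for the auditory P300 speller, and the 25% chance accuracy of the four-choice scanning protocol) are conventionally derived from the Wolpaw information-transfer-rate formula used throughout the BCI literature. A minimal sketch of that formula follows; it is illustrative only, the function names are my own, and it does not reproduce any paper's exact evaluation code.

```python
import math

def itr_bits_per_selection(n_choices: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    where N is the number of possible targets and P the selection accuracy.
    """
    n, p = n_choices, accuracy
    if p >= 1.0:          # perfect accuracy: full log2(N) bits per selection
        return math.log2(n)
    if p <= 0.0:          # degenerate case: no information transferred
        return 0.0
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_choices: int, accuracy: float,
                        selections_per_minute: float) -> float:
    """Scale bits/selection by the selection rate to get bits/min."""
    return itr_bits_per_selection(n_choices, accuracy) * selections_per_minute
```

Note that at chance-level accuracy the formula yields exactly zero bits (e.g. `itr_bits_per_selection(4, 0.25) == 0.0`, matching the 25% chance level of the four-choice scanning protocol), which is why accuracy above chance, not raw character count, is what drives the reported communication rates.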