<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">McFarland, Dennis J</style></author><author><style face="normal" font="default" size="100%">Cacace, Anthony T</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Modality specificity is the preferred method for diagnosing the auditory processing disorder (APD): response to Moore and Ferguson.</style></title><secondary-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perceptual Disorders</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Auditory, Brain Stem</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuropsychological Tests</style></keyword><keyword><style  face="normal" font="default" size="100%">Psychoacoustics</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25365373</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">25</style></volume><pages><style face="normal" font="default" size="100%">698-9</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><issue><style 
face="normal" font="default" size="100%">7</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cacace, Anthony T</style></author><author><style face="normal" font="default" size="100%">McFarland, Dennis J</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Modality specificity trumps other methods for diagnosing the auditory processing disorder (APD): response to Dillon et al.</style></title><secondary-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perceptual Disorders</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Auditory, Brain Stem</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuropsychological Tests</style></keyword><keyword><style  face="normal" font="default" size="100%">Psychoacoustics</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25365375</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">25</style></volume><pages><style face="normal" font="default" size="100%">703-5</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><issue><style 
face="normal" font="default" size="100%">7</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Hill, Jeremy</style></author><author><style face="normal" font="default" size="100%">Ricci, Erin</style></author><author><style face="normal" font="default" size="100%">Haider, Sameah</style></author><author><style face="normal" font="default" size="100%">McCane, Lynn M</style></author><author><style face="normal" font="default" size="100%">Heckman, Susan M</style></author><author><style face="normal" font="default" size="100%">Wolpaw, Jonathan</style></author><author><style face="normal" font="default" size="100%">Vaughan, Theresa M</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A practical, intuitive brain-computer interface for communicating 'yes' or 'no' by listening.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids for Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Equipment Design</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Equipment Failure Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Man-Machine Systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Quadriplegia</style></keyword><keyword><style  face="normal" font="default" size="100%">Treatment Outcome</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24838278</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">11</style></volume><pages><style face="normal" font="default" size="100%">035003</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">OBJECTIVE:
Previous work has shown that it is possible to build an EEG-based binary brain-computer interface system (BCI) driven purely by shifts of attention to auditory stimuli. However, previous studies used abrupt, abstract stimuli that are often perceived as harsh and unpleasant, and whose lack of inherent meaning may make the interface unintuitive and difficult for beginners. We aimed to establish whether we could transition to a system based on more natural, intuitive stimuli (spoken words 'yes' and 'no') without loss of performance, and whether the system could be used by people in the locked-in state.
APPROACH:
We performed a counterbalanced, interleaved within-subject comparison between an auditory streaming BCI that used beep stimuli, and one that used word stimuli. Fourteen healthy volunteers performed two sessions each, on separate days. We also collected preliminary data from two subjects with advanced amyotrophic lateral sclerosis (ALS), who used the word-based system to answer a set of simple yes-no questions.
MAIN RESULTS:
The N1, N2 and P3 event-related potentials elicited by words varied more between subjects than those elicited by beeps. However, the difference between responses to attended and unattended stimuli was more consistent with words than beeps. Healthy subjects' performance with word stimuli (mean 77% ± 3.3 s.e.) was slightly but not significantly better than their performance with beep stimuli (mean 73% ± 2.8 s.e.). The two subjects with ALS used the word-based BCI to answer questions with a level of accuracy similar to that of the healthy subjects.
SIGNIFICANCE:
Since performance using word stimuli was at least as good as performance using beeps, we recommend that auditory streaming BCI systems be built with word stimuli to make the system more pleasant and intuitive. Our preliminary data show that word-based streaming BCI is a promising tool for communication by people who are locked in.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lawfield, Angela</style></author><author><style face="normal" font="default" size="100%">McFarland, Dennis J</style></author><author><style face="normal" font="default" size="100%">Cacace, Anthony T</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Dichotic and dichoptic digit perception in normal adults.</style></title><secondary-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Am Acad Audiol</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adolescent</style></keyword><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Dichotic Listening Tests</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Functional Laterality</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Recognition 
(Psychology)</style></keyword><keyword><style  face="normal" font="default" size="100%">Reference Values</style></keyword><keyword><style  face="normal" font="default" size="100%">Reproducibility of Results</style></keyword><keyword><style  face="normal" font="default" size="100%">Task Performance and Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Visual Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Young Adult</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/21864471</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">22</style></volume><pages><style face="normal" font="default" size="100%">332-41</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">BACKGROUND:
Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison.
PURPOSE:
To evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults.
RESEARCH DESIGN:
A multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception.
STUDY SAMPLE:
Thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity.
DATA COLLECTION AND ANALYSIS:
A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation.
RESULTS:
The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index.
CONCLUSIONS:
The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as &quot;speech related output mechanisms&quot; and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects.</style></abstract><issue><style face="normal" font="default" size="100%">6</style></issue></record></records></xml>