March 2013
Kelly L. Tremblay, PhD, CCC-A, FAAA, and Jessica Sullivan, PhD
At the core of almost every rehabilitation program for people with hearing loss is the use of amplification. The goal of hearing aid amplification is to improve a person's access to sound. Depending on the degree and configuration of the individual's hearing loss, the hearing aid is tasked with increasing sound levels in different frequency regions to ensure that incoming speech frequencies reach the ear. However, signal detection at the level of the ear does not guarantee perception. A perceptual event depends not only on the integrity of the signal at the level of the ear, but also on how that sound is coded and integrated across multiple networks in the brain. This ear-brain system starts with sound leaving the hearing aid and then entering the peripheral (ear) and central (brain) systems. The extent to which the integrity of the incoming sound is preserved at all stages (the ear, brainstem, and cortex) is assumed to contribute to the resultant perceptual event. Decades of research in both animals and humans has shown that the acoustic components of a signal (level and timing) are biologically represented through various place and timing codes. Auditory deprivation (hearing loss) alters those original biological codes, as does aging, from maturation through senescence. It follows, then, that reintroducing sound after periods of auditory deprivation also sets in motion a series of events that code the newly audible signals provided by a hearing aid. It is for this reason that the brain can be considered an essential component of rehabilitation. Yet little is known about how the brain processes amplified sound or how it contributes to perception and the successful use of hearing aid amplification.
To advance our understanding of the brain's involvement in rehabilitation through amplification, a special issue of the International Journal of Otolaryngology was assembled. The motivation for studying the brain-hearing aid relationship can be stated in two ways: (1) finding a way to verify that the amplified signal is reaching the brain, and (2) understanding how the brain makes use of amplified sound. Ultimately, the goal of this line of research is to identify converging evidence that will advance clinical practice as well as the patient's experience with hearing aids.
Electrophysiology has been used for decades to quantify the neural detection of sound for the purpose of estimating aided thresholds (Kiessling, 1982; Gravel, Kurtzberg, Stapells, Vaughan, & Wallace, 1989). However, numerous studies showed that the hearing aid introduces new variables that interfere with auditory brainstem response (ABR) recordings. Confounding variables such as stimulus rate and the compression characteristics of the hearing aid (Gorga, Beauchaine, & Reiland, 1987) eventually prevented the acquisition of aided ABR responses using typical click and tonal stimuli. In the special issue, however, Anderson and Kraus (2013) describe why they think the complex speech-evoked ABR (cABR) might be a promising new approach to recording aided evoked potentials for clinicians. In their article, they show how hearing aid amplification results in measurable changes in this frequency-following response when recorded in a single subject. What is not yet known is whether compression features, attack time, and other hearing aid characteristics contribute to the brainstem response, and whether they will turn out to be confounds or opportunities for clinicians. Furthermore, scientists will need to determine the utility of the measure in people with different types of hearing loss. If found to be sensitive, brainstem measures could be used to confirm the neural detection of amplified sound.
Cortical auditory evoked potentials (CAEPs) have also been tried over the years, and a commercial device is being marketed for the purpose of estimating aided thresholds in infants (NAL, 2013). However, in the special issue, a number of articles from different research laboratories provide sufficient reason and data to warrant caution. Articles by Billings, Papesh, Penman, Baltzell, and Gallun (2012) and by Jenstad, Marynewich, and Stapells (2012) showed that responses to aided stimuli are driven by both the level and the signal-to-noise ratio with which the hearing aid transduces the signal. This means that evoked cortical activity likely reflects interactions between hearing aid signal processing and the stimulus used to evoke the response in ways that cannot always be defined or controlled. Similarly, Easwar, Purcell, and Scollie (2012) showed large differences in hearing aid output arising from stimulus context that may also influence the interpretation of audibility based on aided CAEPs. Together, these articles reinforce the fact that interpretation of a cortical evoked response, such as the P1-N1-P2, is complex and that we do not yet fully understand what contributes to the evoked brain activity when a hearing aid is involved.
It is often assumed that the benefit of hearing aids lies not primarily in better speech performance, but in less effortful listening in the aided than in the unaided condition. In other words, it is important not only that the sound be detected by the brain, but also that the brain be able to make use of it. In the special issue, Koelewijn, Zekveld, Festen, Rönnberg, and Kramer (2012) examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant to language processing. Pupil dilation was measured in 32 normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at 50% and 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. The authors showed that pupil responses were larger during interfering speech than during fluctuating noise. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. These results tell us that, even in normal-hearing listeners, when the listening condition is complex and difficult, a person's ability to perceive speech is related in part to his or her cognitive abilities.
Another way to improve the ability to perceive amplified speech in the presence of competing noise is to engage cognitive processes through active listening exercises. Auditory training exercises can be an effective method for improving the perceptual skills of adults with and without hearing loss (Tremblay & Moore, 2012); however, there is no evidence regarding the effectiveness of auditory training in noise for pediatric hearing aid users. Recent work by Sullivan, Thibodeau, and Assmann (2013) is filling that void by demonstrating remarkable perceptual gains in children with moderate-to-severe sensorineural hearing loss. In their study, children wore their own hearing aids while completing 7 hours of word training over a 3-week period, in either continuous or interrupted noise, identifying the keyword(s) on a computer. Children who trained in interrupted noise showed greater immediate benefit in speech recognition in noise than those who trained in continuous noise: their signal-to-noise ratio (SNR) threshold changes were significantly better than those of the continuous-noise training group or a control group. As an example, the average SNR threshold for speech in interrupted noise improved from 9.46 dB to 3.78 dB (a 5.69-dB difference score). The overall gain of 5.69 dB SNR is quite remarkable given that every 1-dB change represents an 8.9% improvement in speech intelligibility on the Hearing in Noise Test (HINT; Nilsson, Soli, & Sullivan, 1994). This means the 5.69-dB gain represents a 44.5% improvement in speech intelligibility in noise. These results show the impressive capacity that children have to improve their listening abilities by participating in auditory training exercises.

Explanations for such improvements could lie in the brain's ability to recover from the masking effect of the noise by allowing synchronous firing during the intermittent periods of silence. Billings, Bennett, Molis, and Leek (2011) have provided evidence to support this explanation: they showed that onset responses, reflected by the P1-N1-P2, are more prominent when recorded in interrupted than in continuous noise. Children may have learned to listen to and make use of information contained in the gaps, a process sometimes described as "glimpsing." Working memory, defined as the ability to encode, store, and manipulate information concurrently with other mental processing activities, might also have been affected by the training.
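As a back-of-the-envelope restatement of the HINT trade-off cited above (our own sketch, not a calculation reported in the study), the intelligibility benefit implied by a change in SNR threshold can be written as:

\[
\Delta\text{Intelligibility (\%)} \;\approx\; 8.9\,\frac{\%}{\text{dB}} \times \Delta\text{SNR (dB)}
\]

This linear trade-off is what links the dB gains reported by Sullivan and colleagues to the percent-intelligibility figures quoted above.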
In summary, there is a growing body of literature suggesting that the neural detection and use of sound are important components of perception. While there is still much to learn about how to quantify the neural detection of amplified signals, there is emerging physiological evidence showing that (1) trying to listen in difficult environments requires cognitive effort that can be quantified physiologically, and (2) perceptual processes (and thus the underlying neural mechanisms) are capable of change.
Kelly Tremblay is a professor and clinician at the University of Washington in Seattle. She and her research team are interested in auditory rehabilitation and study experience-related changes in the brain. Their program of research includes the effects of auditory deprivation (age-related hearing loss) and stimulation (hearing aids, cochlear implantation, and auditory training) on the brain. Contact her at tremblay@uw.edu.
Jessica Sullivan is an assistant professor in the speech and hearing sciences department at the University of Washington. Her pediatric aural habilitation lab focuses on the relationship between working memory and speech perception in noise, and on improving speech recognition in noise through auditory training. Contact her at sulli10@u.washington.edu.
Anderson, S., & Kraus, N. (2013). The potential role of the cABR in assessment and management of hearing impairment. International Journal of Otolaryngology, 2013, 1–10. doi:10.1155/2013/604729.
Billings, C. J., Bennett, K. O., Molis, M. R., & Leek, M. R. (2011). Cortical encoding of signals in noise: Effects of stimulus type and recording paradigm. Ear and Hearing, 32(1), 53–60.
Billings, C. J., Papesh, M. A., Penman, T. M., Baltzell, L. S., & Gallun, F. J. (2012). Clinical use of aided cortical auditory evoked potentials as a measure of physiological detection or physiological discrimination. International Journal of Otolaryngology, 2012, 1–14. doi:10.1155/2012/365752.
Easwar, V., Purcell, D. W., & Scollie, S. D. (2012). Electroacoustic comparison of hearing aid output of phonemes in running speech versus isolation: Implications for aided cortical auditory evoked potentials testing. International Journal of Otolaryngology. doi:10.1155/2012/518202.
Gorga, M. P., Beauchaine, K. A., & Reiland, J. K. (1987). Comparison of onset and steady-state responses of hearing aids: Implications for use of the auditory brainstem response in the selection of hearing aids. Journal of Speech and Hearing Research, 30(1), 130–136.
Gravel, J. S., Kurtzberg, D., Stapells, D. R., Vaughan, H. G., & Wallace, I. F. (1989). Case studies. Seminars in Hearing, 10(3), 272–287.
Jenstad, L. M., Marynewich, S., & Stapells, D. R. (2012). Slow cortical potentials and amplification-Part II: Acoustic measures. International Journal of Otolaryngology, 2012, 1–14. doi:10.1155/2012/386542.
Kiessling, J. (1982). Hearing aid selection by brainstem audiometry. Scandinavian Audiology, 11, 269–275.
Koelewijn, T., Zekveld, A. A., Festen, J. M., Rönnberg, J., & Kramer, S. E. (2012). Processing load induced by informational masking is related to linguistic abilities. International Journal of Otolaryngology, 2012, 1–11. doi:10.1155/2012/865731.
National Acoustic Laboratories (NAL). (2013). The HEARLab® system. Retrieved March 6, 2013, from http://hearlab.nal.gov.au.
Nilsson, M., Soli, S. D., & Sullivan, J. A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95(2), 1085–1099.
Sullivan, J. R., Thibodeau, L. M., & Assmann, P. F. (2013). Auditory training of speech recognition with interrupted and continuous noise maskers by children with hearing impairment. Journal of the Acoustical Society of America, 133(1), 495–501.
Tremblay, K., & Moore, D. (2012). Current issues in auditory plasticity and auditory training. In K. L. Tremblay & R. F. Burkard (Eds.), Translational perspectives in auditory neuroscience (pp. 165–189). San Diego, CA: Plural Publishing.