2018 Seminars

January 2018

HRC Seminar with Luke Baltzell January 19th

Luke Baltzell, Department of Cognitive Science, University of California, Irvine.

Title: The role of cortical entrainment in speech perception: some considerations

Abstract: It has been suggested that the cortical entrainment response reflects the phase-resetting of neuronal oscillations that track speech information. However, the extent to which the entrainment response reflects acoustic rather than linguistic features of the speech stimulus remains unclear, as does the neural representation of the speech stimulus being tracked. We present evidence that the entrainment response tracks acoustic rather than linguistic information, and that it tracks acoustic information within peripheral auditory channels.

HRC Seminar with Nace Golding January 26th

Nace Golding, University of Texas at Austin

Title: Beyond Jeffress: New Insights into the Sound Localization Circuitry in the Medial Superior Olive

Abstract: The circuitry in the medial superior olive (MSO) of mammals extracts azimuthal information from the interaural time differences (ITDs) of sounds arriving at the two ears. For the past 70 years, models of sound localization have assumed that MSO neurons represent a single population of cells with homogeneous properties. Here I will discuss new data showing that MSO neurons are in fact physiologically diverse, with properties that depend on cell position along the topographic map of frequency. In many neurons, high-frequency firing is promoted by fast subthreshold membrane oscillations. We propose that differences in these and other physiological properties across the MSO neuron population enable the MSO to duplex the encoding of ITD information in fast, sub-millisecond time-varying signals as well as in slower envelopes.

 

February 2018

HRC Seminar with Matt McGinley February 2nd

Matt McGinley, Baylor College of Medicine

Title: Pupil-indexed neuromodulation of brain state and cognition

Abstract: Moment-to-moment changes in the state of the brain powerfully influence cognitive processes such as perception and decision-making. For example, during a research seminar we may attend closely to the speaker, drift nearly to sleep, and then arouse rapidly and flee the room following a fire alarm. Failure to notice the same alarm during deep sleep could have tragic consequences. The McGinley lab seeks to understand how these shifts in internal brain states – such as arousal and attention – shape our perception and actions. Brain state is powerfully controlled by release throughout the brain of neuromodulatory transmitters such as acetylcholine and norepinephrine. In addition to controlling brain state, these modulatory systems exert temporally precise control of the cerebral cortex to guide effective learning and decision-making. Our research aims to understand the natural cellular-, synaptic-, and circuit-level physiologic mechanisms by which neuromodulation of the cortex shapes cognition. We use the pupil as a proxy for neuromodulatory brain state. We train mice in psychometric value-based decision-making tasks. To dissect these brain circuits, we conduct two-photon imaging, optogenetics, whole-cell recording, extracellular recording, and pharmacology—all during behavior. We also seek to develop closed-loop electrical interventions to treat related disorders, using novel biosensors and brain stimulation devices.

 

March 2018

HRC Seminar with Yue Sun – Cancelled March 23rd

Yue Sun – Max Planck Institute

HRC Seminar with Amanda Griffin March 30th

Amanda Griffin, Boston Children’s Hospital at Waltham

Title: Effects of Pediatric Unilateral Hearing Loss on Speech Recognition, Auditory Comprehension, and Quality of Life

Abstract: A growing body of research is challenging long-held assumptions that pediatric unilateral hearing loss (UHL) has minimal detrimental effects on children’s development. It is now well understood that children with UHL are at risk for speech and language delays, psychosocial issues, and academic underachievement. Despite this recognition, audiological service provision in this population has suffered from insufficient evidence of objective benefit from the variety of interventions that are available. Relatively few studies have expressly focused on understanding the variability in auditory abilities within this special population, which is imperative to inform intervention strategies. The current talk will briefly review the existing literature on global outcomes and then focus on newer auditory research exploring the effects of UHL on masked sentence recognition in a variety of target/masker spatial configurations, auditory comprehension in quiet and in noise, and hearing-related quality of life in school-aged children.

 

April 2018

HRC Seminar with Lauren Calandruccio April 6th

Lauren Calandruccio, Case Western Reserve University

Title: Speech-on-speech masking: Properties of the masker speech that change its effectiveness

Abstract: In this lecture, I will present two data sets evaluating sentence recognition in the presence of competing speech maskers. The importance of who is talking in the background and what they are saying will be evaluated. In the first set of experiments, we will assess whether one of the two talkers within the masker speech dominates the masker’s overall effectiveness. In the second set of experiments, we will explore whether the semantic meaning of the masker speech matters when controlling for syntax, lexical content, and the talker’s voice.

HRC Seminar with Matthew Masapollo April 13th

Matthew Masapollo, Boston University

Title: Speech Perception in Adults and Infants: Some Universal Characteristics and Constraints

Abstract: A fundamental issue in the field of speech perception is how perceivers map the input speech signal onto the phonetic categories of their native language. Over the years, considerable research has focused on addressing how the nature of the mapping between acoustic and phonetic structures changes with linguistic experience over the course of development. This emphasis on exploring what is language-specific as opposed to what is universal in the speech categorization process derived in part from research with adults, infants and non-human primates on the well-studied phenomenon called the “perceptual magnet effect” (Kuhl, 1991), which revealed that early linguistic experience functionally alters perception by decreasing discrimination sensitivity near native phonetic category prototypes and increasing sensitivity near boundaries between categories. However, there is now growing evidence that young infants reared in different linguistic communities initially display universal perceptual biases that guide and constrain how they learn to parse phonetic space, and that these biases continue to operate in adult language users independently of language-specific prototype categorization processes. Recent findings on this issue, which are summarized in this talk, suggest that the categorization processes that map the speech signal onto categorical phonetic representations are shaped by a complex interplay between initial, universal biases and experiential influences.

HRC Seminar with Alexandra Jesse April 20th

Alexandra Jesse, University of Massachusetts at Amherst

Title: Learning about speaker idiosyncrasies in audiovisual speech

Abstract: Seeing a speaker typically improves speech perception, especially in adverse conditions. Audiovisual speech is more robustly recognized than auditory speech, since visual speech assists recognition by contributing information that is redundant and complementary to the information obtained from auditory speech. The realization of phonemes varies, however, across speakers, and listeners are sensitive to this variation in both auditory and visual speech during speech recognition. But listeners are also sensitive to consistency in articulation within a speaker. When an idiosyncratic articulation renders a sound ambiguous, listeners use available disambiguating information, such as lexical knowledge or visual speech information, to adjust the boundaries of their auditory phonetic categories to incorporate the speech sound into the intended category. This facilitates future recognition of the sound. For visual speech to best aid recognition, listeners likewise have to flexibly adjust their visual phonetic categories to speakers. In this talk, I will present work showing how lexical knowledge and speech information can both assist the retuning of phonetic categories to speakers, and how these processes seem to rely on attentional resources. Furthermore, I will present work showing that listeners rapidly form identity representations of unfamiliar speakers’ facial motion signatures, which subserve talker recognition but may also aid speech perception.

HRC Seminar with Bharath Chandrasekaran April 27th

Bharath Chandrasekaran, University of Texas at Austin

Title: Cognitive-sensory influences on the subcortical representation of speech signals

Abstract: Scalp-recorded electrophysiological responses to complex, periodic auditory signals reflect phase-locked activity from neural ensembles within the subcortical auditory system. These responses, referred to as frequency-following responses (FFRs), have been widely utilized to index typical and atypical representation of speech signals in the auditory system. In this talk, I will discuss two studies from my lab that evaluated cognitive-sensory interactions in the subcortical representation of speech features. In one study (Xie et al., in revision), we used novel machine learning metrics to demonstrate the influence of cross-modal attention on the neural encoding of speech signals. We found that the relationship between visual attentional and auditory subcortical processing is highly contingent on the predictability of incoming auditory streams. When attention is disengaged from the auditory system to process visual signals, subcortical auditory representation is enhanced when stimulus presentation is less predictable. We posit that, when attentional resources are allocated to the visual domain, a reduction in top-down auditory cortical control gears the subcortical auditory system towards novelty detection. In a second study (Reetzke et al., submitted), we examined the impact of long-term sound-to-category training on the subcortical representation of speech signals. We trained English-speaking adults on a non-native contrast (Mandarin tones) using a sound-to-category training task for > 4,400 trials over ~17 consecutive days. Each subject was monitored from a novice to an experienced stage of performance, defined as maintenance of a target criterion (90%) for three consecutive days, a criterion defined by native Mandarin performance. Subjects were then over-trained for ten additional days to stabilize and automatize behavior. To assay neural plasticity, we recorded FFRs to the four Mandarin tones at various learning stages. Our results show that English-speaking adults can become as accurate and fast at categorizing non-native Mandarin speech sounds as native Chinese adults. Learners were also able to generalize to novel stimuli and demonstrated categorical perception of a tone continuum equivalent to that of native speakers. Notably, robust changes in neurophysiological responses to Mandarin tones emerge after the behavior is stabilized, and this neural plasticity, along with the behavior, is retained after two months of no training. I will discuss results from these two studies within the context of the predictive tuning model of auditory plasticity (Chandrasekaran et al. 2014).
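
As an illustration of how machine-learning metrics can index the fidelity of subcortical speech encoding, the sketch below decodes which Mandarin tone evoked each FFR trial from its spectral features; the array shapes, feature choice (low-frequency FFT magnitudes), and classifier are hypothetical stand-ins rather than the specific metrics used in the studies described above.

```python
# Minimal sketch: decoding Mandarin tone identity from single-trial FFRs.
# The array shapes, feature choice, and classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
fs = 2000                               # FFR sampling rate (Hz), hypothetical
n_trials, n_samples = 400, 500          # trials x time points per trial
ffr = rng.standard_normal((n_trials, n_samples))   # placeholder FFR epochs
tones = rng.integers(0, 4, n_trials)               # tone labels for the four tones

# Feature extraction: magnitude spectrum below 1 kHz, where FFR energy lies.
spectra = np.abs(np.fft.rfft(ffr, axis=1))
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
features = spectra[:, freqs <= 1000]

# Cross-validated decoding accuracy; above-chance accuracy would indicate that
# tone identity is recoverable from the neural response.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, features, tones, cv=5).mean()
print(f"Mean decoding accuracy: {acc:.2f} (chance = 0.25)")
```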

 

September 2018

HRC Seminar with Michaela Warnecke September 14th

Michaela Warnecke, University of Wisconsin-Madison

Title: Behavioral adaptations to changes in the acoustic scene of the echolocating bat

Abstract: Our natural environment is noisy, and in order to navigate it successfully, we must pick out the important components that may guide our next steps. For humans, a common challenge in analyzing the acoustic scene is the segregation of speech communication sounds from background noise. This process is not unique to humans: echolocating bats emit high-frequency biosonar signals and listen to echoes returning from objects in their environment. The acoustic input they receive is a complex sound containing echoes reflecting off target prey and other scattered objects, conspecific calls and echoes, and any naturally occurring environmental noises. The bat is thus faced with the challenge of segregating this complex sound wave into the components of interest in order to adapt its flight and echolocation behavior in response to fast and dynamic environmental changes. In this talk, I will discuss two approaches to investigating the mechanisms that may aid the bat in analyzing its acoustic scene. First, I will discuss how bats adapt their behavior in open spaces and cluttered environments. More specifically, I will outline how the temporal patterning of echolocation calls is affected during competitive foraging by paired bats. The results of these experiments show that “silent behavior”, the ceasing of echolocation call emission, which had previously been proposed as a mechanism to avoid acoustic interference or to “eavesdrop” on another bat, may not be as common as previously reported. Second, I will outline the bat’s adaptations to changes in controlled echo-acoustic flow patterns, similar to those it may encounter when flying along forest edges and among clutter. The findings of these studies show that big brown bats adapt their flight paths in response to the intervals between echoes, and suggest that there is a limit to how closely objects can be spaced before the bat no longer represents them as distinct.

HRC Seminar with Matt Goupell September 21st

Matt Goupell, University of Maryland, College Park

Title: Spatial hearing with interaural level differences in cochlear-implant users

Abstract: Over the past four decades, multi-channel cochlear implants (CIs), or bionic auditory prostheses, have been provided to severe-to-profoundly hearing-impaired individuals with the primary goal of partially restoring speech understanding. There has been great success in achieving this goal – at least in quiet. More recently, CIs have been provided with the goal of also giving access to sound in both ears, thus potentially improving spatial hearing. There remains, however, much room for improvement in sound localization abilities and speech understanding in noise for CI users. A major reason is that normal-hearing (NH) humans primarily use low-frequency (<1500 Hz) interaural time differences (ITDs) for spatial hearing; the current generation of CIs does not convey low-frequency ITDs, and CI users are forced to rely on high-frequency interaural level differences (ILDs) for spatial hearing. Unfortunately, ILDs produced by the head are complicated functions of frequency and source direction, and it is unknown how the brain processes ILDs in the absence of ITD information. Furthermore, ILDs are distorted at multiple stages of electric sound encoding, further obscuring understanding of the encoding of spatial information with CIs. Therefore, recent work from our lab has focused on understanding ILD processing in CI and NH listeners. The goal is a deeper understanding of the limitations of ILD processing for CI users and to determine whether good spatial hearing can be achieved with ILDs alone, or whether low-frequency ITD encoding is necessary.
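
To make the two binaural cues concrete, the following sketch estimates an ILD (the broadband level difference in dB between the ears) and an ITD (the lag of maximal interaural cross-correlation) from a synthetic two-channel signal; the stimulus and parameter values are illustrative and not drawn from the talk.

```python
# Minimal sketch: estimating ITD and ILD from a two-channel (binaural) signal.
# The synthetic stimulus and parameter values are illustrative assumptions.
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
itd_true = 300e-6                        # 300 microseconds, within the natural range
ild_true_db = 6.0                        # right ear 6 dB more intense

left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - itd_true)) * 10 ** (ild_true_db / 20)

# ILD: broadband level difference in dB between the two ears.
ild_db = 20 * np.log10(np.sqrt(np.mean(right ** 2)) / np.sqrt(np.mean(left ** 2)))

# ITD: lag (in seconds) that maximizes the interaural cross-correlation.
xcorr = np.correlate(right, left, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs
itd_est = lags[np.argmax(xcorr)]

print(f"ILD estimate: {ild_db:.1f} dB, ITD estimate: {itd_est * 1e6:.0f} us")
```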

HRC Seminar with Peter Weber September 28th

Peter Weber, Boston Medical Center

Title: Implantable Hearing Devices: What We Know, What We Need to Know, What Could HRC Do?

Abstract: Various types of implantable hearing devices are now the standard of care for many patients with defined indications. Cochlear implants started out as a treatment for deaf adults and today are used for patients with significant residual hearing and in children as young as 6 months. However, issues with bilateral, bimodal, and unilateral hearing still exist. How best to program for music appreciation? Why do some individuals struggle while others are stars? How can we eliminate background noise? How important is sound localization? Tinnitus suppression? These same questions and concerns apply to osseointegrated implants for conductive hearing loss and single-sided deafness. However, now we must consider which is better for single-sided deafness: a CROS hearing aid, a Baha, or a cochlear implant. Where does the Envoy implant fit in? What about better, more natural alternatives, such as hearing-regeneration drugs or viral vectors to “fix” abnormal DNA sequences? What role can HRC play in these questions? How can HRC enhance hearing for patients? What questions are most intriguing to your research interests? How can we COLLABORATE?
 

October 2018

HRC Seminar with Mark Parker October 12th

Mark Parker, Steward St. Elizabeth’s Medical Center and Tufts University School of Medicine

Title: Anatomical and Therapeutic Correlates of Hearing-in-Noise

Abstract: Hearing in noise (HIN) is a primary complaint of both the hearing impaired and the hearing aid user. Both auditory nerve (AN) function and outer hair cell (OHC) function contribute to HIN, but their relative contributions are still being elucidated. Due to their electromotility function, OHCs play a critical role in HIN by fine-tuning the response of the basilar membrane. Further, animal studies suggest that auditory synaptopathy, the loss of synaptic contact between hair cells and the AN, may be another cause of HIN difficulty. The primary question we are trying to answer is whether HIN performance is primarily dependent on OHC function, AN function, or both. Secondarily, we ask whether hearing aids (HAs) effectively compensate for these otopathologies in persons with HIN difficulty. While there is strong evidence that auditory synaptopathy occurs in animal models, there is debate as to whether auditory synaptopathy is clinically significant in humans, likely because of disparate methods of measuring noise exposure in humans and the high variability in susceptibility to hearing impairment across individuals. Rather than use self-reported noise exposure, another option is to assume that the general population exhibits a range of noise exposures and resulting otopathologies, and to define auditory synaptopathy operationally as low auditory compound action potential (CAP) amplitude (< 2 s.d. below the mean) accompanied by normal OHC amplitudes (within +/- 1 s.d. of the mean) in persons with a pure-tone average (PTA) < 12.5 dB HL. Applying this operational definition of synaptopathy to our clinical database of over 280 adult subjects provides evidence that auditory synaptopathy is not only present in persons with hearing thresholds within normal limits (WNL) but, at an incidence of as much as 45%, may be a relatively common occurrence. The data further demonstrate that persons with hearing WNL may exhibit HIN difficulties, that such persons may exhibit two distinct types of undetected otopathology (auditory neuropathy and/or OHC dysfunction), and that HIN performance is primarily governed by OHC, rather than AN, function. The data also demonstrate that HAs provide benefit in HIN performance to persons with OHC dysfunction regardless of whether their thresholds are in the normal or abnormal ranges of hearing sensitivity, which suggests that HAs compensate for OHC dysfunction even in persons with hearing within the normal range.
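
The operational definition above amounts to a simple rule over normative z-scores; the sketch below illustrates it with placeholder normative means and standard deviations (the actual clinical norms are not given here).

```python
# Minimal sketch of the operational definition of synaptopathy described above:
# low CAP amplitude (< 2 s.d. below the normative mean), normal OHC response
# (within +/- 1 s.d.), and PTA < 12.5 dB HL. Normative values are placeholders.
CAP_NORM_MEAN, CAP_NORM_SD = 1.0, 0.3   # hypothetical normative CAP amplitude (uV)
OHC_NORM_MEAN, OHC_NORM_SD = 6.0, 2.0   # hypothetical normative OHC amplitude (dB SPL)

def meets_synaptopathy_criteria(cap_amplitude, ohc_amplitude, pta_db_hl):
    """Return True if a subject meets the operational synaptopathy definition."""
    cap_z = (cap_amplitude - CAP_NORM_MEAN) / CAP_NORM_SD
    ohc_z = (ohc_amplitude - OHC_NORM_MEAN) / OHC_NORM_SD
    return cap_z < -2.0 and abs(ohc_z) <= 1.0 and pta_db_hl < 12.5

# Example: reduced CAP amplitude, normal OHC amplitude, normal-hearing PTA.
print(meets_synaptopathy_criteria(cap_amplitude=0.3, ohc_amplitude=5.5, pta_db_hl=10.0))
```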

HRC Seminar with Norbert Kopco October 26th

Norbert Kopco, Safarik University

Title: N/A

Abstract: N/A

 

November 2018

HRC Seminar with Yi Shen November 9th

Yi Shen, Indiana University

Title: Psychometrics, Machine-Learning, and Clinical Assessments

Abstract: Maximizing limited clinical time to gain as much information as possible about a patient’s auditory profile is essential for behavioral clinical tests in audiology. With modern Bayesian adaptive estimation techniques, significant reductions in data collection time can be achieved while maintaining high reliability. In some cases, the time savings can be an order of magnitude, reducing testing time from hours to minutes. In this presentation, I will report a series of Bayesian adaptive tests recently developed and validated in my laboratory. These tests go beyond the hearing threshold and focus on suprathreshold capabilities, including auditory spectral and temporal resolution, the equal-loudness level contour, temporal masking release for speech recognition, and spectral relative weights for speech perception. These early efforts promise a new era in behavioral hearing assessments, in which sophisticated auditory models and signal-processing algorithms can be fitted to individual patients within a manageable amount of time.
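
As a rough illustration of the Bayesian adaptive approach described above, the sketch below estimates a detection threshold by maintaining a posterior over candidate thresholds and presenting, on each trial, the level that minimizes the expected posterior entropy; the psychometric model, grid, and simulated listener are assumptions for illustration, not the laboratory’s actual procedures.

```python
# Minimal sketch of Bayesian adaptive threshold estimation: keep a posterior over
# candidate thresholds and pick the stimulus level with maximal expected information
# gain on each trial. Psychometric model, grid, and "listener" are assumptions.
import numpy as np

rng = np.random.default_rng(1)
thresholds = np.linspace(0, 60, 121)      # candidate thresholds (dB), hypothetical grid
levels = np.linspace(0, 60, 61)           # allowable stimulus levels (dB)
slope, guess, lapse = 0.5, 0.5, 0.02      # logistic psychometric parameters (assumed)
true_threshold = 32.0                     # simulated listener's threshold

def p_yes(level, threshold):
    """Probability of a correct response under a logistic psychometric function."""
    p = 1.0 / (1.0 + np.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess - lapse) * p

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

posterior = np.ones_like(thresholds) / len(thresholds)   # flat prior

for trial in range(40):
    # Expected posterior entropy for every candidate level; present the minimizer.
    expected_entropy = []
    for level in levels:
        p_resp = p_yes(level, thresholds)                 # P(correct | threshold) on the grid
        p_marg = np.sum(posterior * p_resp)
        post_yes = posterior * p_resp / p_marg
        post_no = posterior * (1 - p_resp) / (1 - p_marg)
        expected_entropy.append(p_marg * entropy(post_yes)
                                + (1 - p_marg) * entropy(post_no))
    level = levels[int(np.argmin(expected_entropy))]

    # Simulated listener response, then Bayesian update of the posterior.
    response = rng.random() < p_yes(level, true_threshold)
    likelihood = p_yes(level, thresholds) if response else 1 - p_yes(level, thresholds)
    posterior *= likelihood
    posterior /= posterior.sum()

print(f"Threshold estimate after 40 trials: {np.sum(posterior * thresholds):.1f} dB")
```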

HRC Seminar with Xiaoqin Wang November 16th

Xiaoqin Wang, Johns Hopkins University

Title: Harmonic Organization of Mammalian Auditory Cortex

Abstract: A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics are produced by a variety of acoustic generators and reflectors in natural environments, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. Given the widespread existence of harmonicity in many aspects of our hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that exhibit selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the mammalian auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Taken together, these findings suggest that a fundamental organizational principle of the mammalian auditory cortex is based on harmonicity. Such an organization can play an important role in speech and music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

 

December 2018

HRC Seminar with Peter Cariani December 7th

Peter Cariani, Boston University

Title: Temporal codes and musical tonality

Abstract: For over 150 years there has been a running debate about the nature of the auditory representations involved in music and speech, specifically whether these are mediated primarily by frequency-domain (cochleotopic, tonotopic) patterns of neural excitation, by time-domain spike-timing information, or by some combination of both. We will discuss the nature of musical tonality from the vantage point of neural coding in the auditory nerve, the first stage of auditory neural processing. When we examine neural responses in the auditory nerve, it is readily apparent that both place and time principles play a role, but that the high acuity and stability of auditory percepts, such as the perception of musical pitch, are due to patterns of spike timing (interspike intervals). The low pitch of harmonic complex tones at their fundamental (F0) is correlated with the most common all-order interspike interval present in the whole population of (~50,000) auditory nerve fibers at a given time. This population-wide temporal pattern representation resembles the (half-wave rectified) autocorrelation function of the stimulus, which contains the same information as its power spectrum. The temporal representation only exists up to the limits of significant phase-locking (4-5 kHz), which may explain the existence region of musical tonality (octave similarity, musical intervals/chroma relations, and robust recognition of transposed melodies). Such representations carry information not only about harmonics but also about subharmonics, and therefore support theories of musical consonance based on harmonicity (alongside roughness), and theories of harmony based on the relative pitch stabilities created by mutually reinforcing or competing sets of subharmonics (fundamental bass).
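
The population-interval idea can be illustrated with a simple computation: the autocorrelation of a half-wave rectified, missing-fundamental harmonic complex peaks at the period of the absent F0; the stimulus parameters below are illustrative assumptions.

```python
# Minimal sketch: the half-wave rectified autocorrelation of a missing-fundamental
# harmonic complex peaks at 1/F0, mirroring the population all-order interval code.
# Stimulus parameters are illustrative assumptions.
import numpy as np

fs = 20000
f0 = 200.0                                  # fundamental (absent from the stimulus)
t = np.arange(0, 0.2, 1 / fs)

# Harmonic complex containing only harmonics 3-6 (no energy at F0 itself).
stimulus = sum(np.sin(2 * np.pi * h * f0 * t) for h in range(3, 7))

rectified = np.maximum(stimulus, 0.0)       # crude stand-in for cochlear half-wave rectification

# Autocorrelation over lags up to 8 ms (below twice the expected F0 period).
max_lag = int(0.008 * fs)
lags = np.arange(1, max_lag)
acf = np.array([np.mean(rectified[:-lag] * rectified[lag:]) for lag in lags])

best_lag = lags[np.argmax(acf)]
print(f"Autocorrelation peak at {1000 * best_lag / fs:.2f} ms "
      f"(F0 period = {1000 / f0:.2f} ms)")
```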