2015 Seminars

February 2015

HRC Seminar with Dr. Piotr Majdak February 27th

Piotr Majdak, Ph.D., Acoustic Research Institute, Austrian Academy of Sciences

Title: Modelling sound localization beyond the horizontal plane

Abstract: Monaural spectral cues are assumed to be the most important cues for sagittal-plane sound localization. They allow listeners to estimate the source elevation within a hemifield and to discriminate between front and back. Spectral cues result from the direction-dependent filtering of broadband sounds by the torso, head, and ear and can be described by head-related transfer functions (HRTFs). While the encoding of sound directions by HRTFs is an acoustic process, the decoding of those cues involves several auditory processing stages. When modelling sagittal-plane sound localization, the dorsal cochlear nucleus (DCN) seems to play an essential role. In the talk, the contribution of the DCN to the model will be analyzed and listener-specific factors influencing localization performance in sagittal planes will be discussed.
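As a rough illustration of how spectral cues could support elevation estimates (a sketch of generic template matching, not the speaker's model), the code below compares an incoming sound's spectral profile against listener-specific HRTF templates for candidate elevations; all arrays here are hypothetical placeholders.

```python
# Minimal sketch of template-based elevation estimation from monaural
# spectral cues (illustrative only; not the speaker's actual model).
# `templates` maps candidate elevations to listener-specific
# log-magnitude HRTF spectra (hypothetical data).
import numpy as np

def estimate_elevation(target_spectrum, templates):
    """Pick the elevation whose HRTF template best matches the target.

    target_spectrum : log-magnitude spectrum of the incoming sound (dB)
    templates       : dict {elevation_deg: log-magnitude template (dB)}
    """
    best_elev, best_dist = None, np.inf
    for elev, template in templates.items():
        # Compare spectral *gradients* so overall level differences
        # do not dominate the match.
        dist = np.mean((np.diff(target_spectrum) - np.diff(template)) ** 2)
        if dist < best_dist:
            best_elev, best_dist = elev, dist
    return best_elev

# Toy usage with random spectra for two candidate elevations.
rng = np.random.default_rng(0)
templates = {-30: rng.normal(size=64), 30: rng.normal(size=64)}
print(estimate_elevation(templates[30] + 0.1 * rng.normal(size=64), templates))
```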
 

April 2015

Cancelled – HRC Seminar with Yoojin Chung April 3rd

Yoojin Chung, Eaton Peabody Lab

Title: Sensitivity to Interaural Time Differences in the Inferior Colliculus of an Awake Rabbit Model of Bilateral Cochlear Implants

HRC Seminar with Gin Best April 10th

Gin Best, Ph.D., Sargent College, Boston University

Title: Audibility and spatial release from masking

HRC Seminar with Howard Gritton April 17th

Howard Gritton, Boston University

Title: Auditory processing and cross-cortical communication: how attention changes what we hear

Abstract: Cortical processing is fundamental to how we understand information that originates in our environment. Signals that may be “heard” in auditory cortex must be passed on to executive regions important for decision making and action selection. I will present recent evidence from passive and active listening experiments in mice suggesting that cholinergic neurotransmission, a critical component of attention, plays a prominent role in how signals are relayed between auditory cortex and prefrontal areas. We have also found that experience and new learning change the nature of this communication, and that under challenging conditions auditory discrimination may require cholinergic input.

HRC Seminar with Edward Large April 24th

Edward Large, University of Connecticut

Title: Musical Neurodynamics: A Neural Resonance Approach to Musical Meaning

Abstract: Music is found among all human cultures, and musical ‘languages’ vary across cultures with learning. Unlike language, however, music rarely refers to the external world. It consists of self-contained patterns of sound, and certain aspects of these patterns are found universally among musical cultures. In this talk, I put forward the hypothesis that universal aspects of music perception can be described completely in terms of resonance. The brain/body resonates to sound on multiple time scales, and we feel aspects of this resonant responding as musical qualia. I discuss two specific examples: the experience of tonal expectancy (What next?) and the experience of temporal expectancy (When next?). Using a combination of neurodynamic models and experimental evidence, I show that this theory is consistent with readily observable brain dynamics, and that it can explain puzzling perceptual and behavioral results in both tonal and temporal expectancy.
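To give the resonance hypothesis a concrete toy form (my illustration, not the speaker's neurodynamic models), the sketch below drives a damped linear oscillator at several rates; the response is largest when the drive matches the oscillator's natural frequency, the simplest form of the entrainment the talk builds on.

```python
# Toy resonance demo: a damped oscillator driven at its natural frequency
# responds most strongly. Neural-resonance models of rhythm use far richer
# nonlinear oscillators; all parameters here are arbitrary illustrations.
import numpy as np

def driven_oscillator(f0, f_drive, dur=5.0, fs=1000.0, damping=2.0):
    """Steady-state amplitude of x'' + 2*damping*x' + (2*pi*f0)^2 x = cos(2*pi*f_drive*t)."""
    dt = 1.0 / fs
    w0 = 2 * np.pi * f0
    x, v = 0.0, 0.0
    amps = []
    for n in range(int(dur * fs)):
        drive = np.cos(2 * np.pi * f_drive * n * dt)
        a = drive - 2 * damping * v - w0 ** 2 * x
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        amps.append(abs(x))
    return max(amps[len(amps) // 2:])    # peak amplitude after transients decay

# Response peaks when the drive matches the 2-Hz natural frequency.
for f_drive in [1.0, 2.0, 4.0]:
    print(f_drive, "Hz drive ->", round(driven_oscillator(2.0, f_drive), 4))
```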
 

May 2015

HRC Seminar with Laurel Carney May 8th

Laurel Carney, University of Rochester Medical Center

Title: Rethinking Peripheral Auditory Responses: The Neural Fluctuations that Drive the CNS

Abstract: Studies of the neural coding of complex sounds tend to focus on discharge rates, phase-locking to fine-structure, and how these quantities change along the auditory pathway. However, many central neurons are strongly driven by relatively low-frequency fluctuations of their inputs. This sensitivity to fluctuations is reflected in modulation transfer functions, measured using sinusoidally modulated signals. However, rate fluctuations occur in the periphery in response to all complex sounds. These fluctuations are influenced by peripheral tuning, which limits the bandwidth and thus the modulation spectrum of tuned responses. In addition, nonlinearities associated with sensory transduction introduce interesting profiles in the rate fluctuations that are not obvious from the stimulus waveform or spectrum. We are exploring the profiles of rate fluctuations across the auditory-nerve population for a number of complex sounds, including stimuli used in classical psychophysical tasks. Results based on computational models and physiological recordings in the midbrain will be presented.
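To make the notion of peripherally shaped rate fluctuations concrete, here is a toy sketch (an illustration under arbitrary filter assumptions, not the speaker's auditory-nerve model): a two-tone complex is passed through one narrowband "channel", and the envelope of the filtered output carries a low-frequency fluctuation at the beat rate.

```python
# Narrowband peripheral filtering limits the bandwidth of a complex sound,
# and the envelope of the filtered signal carries low-frequency rate
# fluctuations. Filter parameters are arbitrary stand-ins.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 20000
t = np.arange(0, 0.5, 1 / fs)
# Two-tone complex: components beat at 200 Hz inside one "channel".
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1200 * t)

# Crude stand-in for one cochlear channel centered at 1100 Hz.
sos = butter(4, [900, 1300], btype="bandpass", fs=fs, output="sos")
channel = sosfiltfilt(sos, x)

# Envelope (Hilbert magnitude) approximates the slow rate fluctuation.
env = np.abs(hilbert(channel))

# Modulation spectrum of the envelope: expect a peak near the 200-Hz beat.
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print("dominant fluctuation ~", freqs[spec.argmax()], "Hz")
```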

HRC Seminar with Inyong Choi May 15th

Inyong Choi, Boston University

Title: Behavioral and neural costs of broadened attention in a dynamic auditory scene

Abstract: To communicate effectively in social settings, we focus selective attention on one speaker while simultaneously monitoring for novel voices that may unexpectedly arise from other locations. We are investigating the behavioral and neural consequences of “broadened” selective attention in social settings. We hypothesize that when a specific location is the focus of attention, sensory inputs from other locations are strongly inhibited, but that when listeners anticipate having to reorient to unexpected events, e.g. another talker, inhibition is weaker. This attenuation of inhibition degrades the listener’s ability to attend to the original target. Results from simultaneous behavioral and neuroimaging (M/EEG) experiments on human listeners will be presented.
 

September 2015

HRC Seminar with Goldie Mehraei September 11th

Goldie Mehraei, MIT/Harvard, Shinn-Cunningham Lab

Title: Revealing auditory nerve fiber loss in humans using auditory brainstem response wave-V latency in noise

Abstract: Recent animal studies show that noise-induced loss of auditory nerve fibers (ANFs) reduces auditory brainstem response (ABR) wave-I amplitudes without affecting hearing thresholds. Although noise-induced neuropathy affects how ABR wave-I amplitude grows with level, ABR latencies have not been thoroughly investigated. Models suggest that how ABR wave-V latency changes with increasing background noise or due to a preceding masker should be a sensitive measure of ANF survival. We tested these predictions in a series of experiments and found evidence that individual differences in ABR wave-V latency in listeners with normal hearing thresholds reflect differences in ANFs. Specifically, we find that the rate of change of wave-V latency with noise level correlates with the ability to use fine temporal cues: listeners with poor sensitivity to envelope interaural time differences showed smaller changes in wave-V latency with increasing noise. In addition, ABR wave-I amplitude growth with stimulus level was a significant predictor of the rate of change of wave-V latency with noise level. We also analyzed results from noise-exposed mice and found analogous patterns. In forward masking, listeners with a delayed wave-V latency exhibited higher forward-masking behavioral thresholds. Furthermore, listeners with the poorest behavioral thresholds showed evidence of faster recovery from forward masking.
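The central analysis can be pictured as a per-listener regression of wave-V latency on noise level, followed by a correlation of those slopes with behavioral ITD sensitivity across listeners. The sketch below shows only the shape of that computation; every number in it is a fabricated placeholder, not the study's data.

```python
# For each listener, regress ABR wave-V latency on background-noise level,
# then correlate the per-listener slopes with a behavioral ITD threshold.
# All values are fabricated placeholders.
import numpy as np
from scipy import stats

noise_levels = np.array([40., 50., 60., 70.])           # dB SPL (assumed)
# Rows: listeners; columns: wave-V latency (ms) at each noise level.
latencies = np.array([[6.1, 6.4, 6.9, 7.5],
                      [6.0, 6.2, 6.5, 6.9],
                      [6.2, 6.3, 6.4, 6.6]])
itd_thresholds = np.array([90., 140., 220.])            # microseconds (assumed)

# Slope of latency vs. noise level for each listener (ms per dB).
slopes = np.array([stats.linregress(noise_levels, row).slope
                   for row in latencies])

# Smaller latency shifts with noise should pair with poorer ITD thresholds.
r, p = stats.pearsonr(slopes, itd_thresholds)
print(f"slopes = {slopes}, r = {r:.2f}, p = {p:.2f}")
```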

HRC Seminar with Dan Polley September 25th

Dan Polley, MIT/Harvard, Eaton-Peabody Lab, MEEI

Title: Descending Control in the Central Auditory Pathway

Abstract: Anatomical studies have described a massive and specific network of corticofugal projections that originate in deep layers of the cerebral cortex and innervate nearly every level of the central nervous system. Auditory corticofugal projections arise from glutamatergic neurons in layers (L) 5 and 6 of the auditory cortex that densely innervate the medial geniculate body of the thalamus and inferior colliculus (IC). L5 and L6 projections differ in nearly every respect – from axon morphology and synaptic physiology to inter-areal targeting within the thalamus and IC. Cortical feedback has been implicated in a diverse set of cognitive functions ranging from memory consolidation to dynamic sensory filtering, yet a detailed understanding of its functions and mechanisms has remained elusive due to the technical difficulties associated with manipulating specific corticofugal circuits. We have taken advantage of modern techniques to selectively activate L6 corticothalamic or L5 corticocollicular neurons to study real-time modulation of subcortical sensory processing in awake mice. We find that L6 corticothalamic neurons can bi-directionally change the gain on cortical sensory tuning via local inhibitory circuits and indirectly via a cortico-thalamo-cortical loop. Our findings suggest a mechanism whereby columnar processing and sound perception can be alternately biased towards detection or discrimination of sound features depending on the relative timing between sound and CT neural activity. In another series of studies, we found that L5 corticocollicular neurons can enhance sound-evoked activity in the external cortex of the IC and broadly tuned regions of the central nucleus, but only when driven with particular activation patterns. I will describe our efforts to identify optimal activation patterns for descending projection systems through closed-loop machine learning algorithms. Collectively, these findings point towards dissociable modulatory effects imposed by L5 and L6 corticofugal networks that strongly shape subcortical auditory processing.
 

October 2015

HRC Seminar with Oded Ghitza October 9th

Oded Ghitza, Research Professor, Biomedical Engineering, Boston University

Title: Neuronal oscillations in parsing continuous speech

Abstract: Driven by the axiom that reliable decoding of speech can only proceed after effective parsing, this study is concerned with the cortical parsing process. The term “parsing” as employed here does not refer to an inference of candidate constituents from the cues in the speech signal — this is carried out by the decoding process — but rather to the function of setting a time-varying, hierarchical window structure synchronized to the input. Oscillation-based models of speech perception suggest a cortical computation principle by which the speech decoding process is guided by a multi-scale parsing process, with a cascade of neuronal oscillators at the core. At the shorter time scale, parsing is into speech fragments that are multi-phone in duration, and it is realized by a theta oscillator capable of tracking the input syllabic rhythm, with the theta cycles aligned with intervocalic speech fragments termed theta-syllables; intelligibility remains high as long as theta is in sync with the input, and it sharply deteriorates once theta is out of sync. At the longer time scale, parsing is into speech fragments that are multi-word in duration, and it is realized by a delta oscillator capable of tracking phrase-level prosodic information, with the delta cycles aligned with chunks; intelligibility remains high as long as delta is in sync with the chunking rate. This talk reviews a model that realizes this cortical computation principle and presents behavioral evidence for its support.
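As a deliberately crude caricature of the theta-scale parsing idea (not the oscillator model itself), one can band-limit a speech envelope to the theta range and treat the troughs of the filtered envelope as window boundaries; the band edges below are assumptions.

```python
# Track the syllabic rhythm by band-limiting the speech envelope to the
# theta range (roughly 3-9 Hz) and taking troughs of the filtered envelope
# as window boundaries. A caricature of the oscillator model, not the model.
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def theta_windows(envelope, fs):
    """Return indices that segment `envelope` into theta-scale windows."""
    sos = butter(2, [3.0, 9.0], btype="bandpass", fs=fs, output="sos")
    theta = sosfiltfilt(sos, envelope)
    troughs, _ = find_peaks(-theta)      # envelope minima = boundaries
    return troughs

# Usage on a synthetic 5-Hz "syllable rhythm" envelope.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
env = 1 + np.cos(2 * np.pi * 5 * t)
print(theta_windows(env, fs) / fs)       # boundary times in seconds
```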

HRC Seminar with Erv Hafter October 23rd

Erv Hafter, UC Berkeley

Title: Sharing attention in psychophysical as well as natural speech environments

Abstract: My larger interest in this work has been in the nature of shared attention in doing two things at once and how we perceive when faced with informational overload. More specifically, the discussion will examine one of the classical questions about limited attentional resources: what determines differences between situations showing serial and parallel processing. In this light, the talk will begin with results from a psychophysical dual task in which judgments are of simple stimulus changes. These will be compared to data from a simulated cocktail party (w/o booze), where listeners process phonetic and semantic information from speech by multiple talkers, speech that maintains the kind of cadence and meaning descriptive of listening in a real cocktail party. Full closure on these questions is not promised, but the hope is to convince the seminar that seeming regularities within the two conditions point to a real difference in the way that we share information in multi-tasking.
 

November 2015

HRC Seminar with Tyler Perrachione November 6th

Tyler Perrachione, Boston University

Title: Cognitive consequences of talker variability

Abstract: In this talk, I will present recent studies from our laboratory using behavior, brain imaging, and noninvasive neurostimulation techniques to investigate the effects of phonetic variability in speech on a variety of domains in human communication. These studies explore how linguistic proficiency helps listeners process the phonetic variability relevant to talker identification, how processing phonetic variability may be different in individuals with developmental communication disorders, how intrinsic and extrinsic talker normalization facilitates speech perception, and how phonetic variability both helps and hinders second-language acquisition.

HRC Seminar with Yoojin Chung November 13th

Yoojin Chung, Eaton-Peabody Lab

Title: Neural coding of cochlear implant stimulation in the inferior colliculus of an unanesthetized rabbit model

Abstract: Cochlear implant (CI) listeners show limits at high frequencies in tasks involving temporal processing such as rate pitch and interaural time difference discrimination. Similar limits have been observed in neural responses to electric stimulation in animals with CIs; however, the upper limit of temporal coding of electric pulse train stimuli in the inferior colliculus (IC) of anesthetized animals is lower than the perceptual limit. We hypothesize that the upper limit of temporal coding and sensitivity to interaural time differences (ITD) have been underestimated in previous studies due to the confound of anesthesia. To test this hypothesis, we characterized responses of single neurons in the IC to pulse train stimuli in an unanesthetized rabbit model of bilateral CIs. First, we found that IC neurons in unanesthetized rabbits exhibit greater sustained responses to high-rate pulse trains, enhanced temporal coding of pulse trains, and higher spontaneous activity compared with results from anesthetized preparations. We demonstrated directly that anesthesia is a major factor underlying these differences by monitoring the responses of single units in one rabbit before and after injection of an ultra-short-acting barbiturate. In the second part of the study, which focused on ITD sensitivity, we found that about 73% of IC neurons were sensitive to ITD in their overall firing rates. On average, ITD sensitivity was best for pulse rates near 80-160 pps and degraded for both lower and higher pulse rates. The degradation in ITD sensitivity at low pulse rates was caused by strong background activity that masked stimulus-driven responses in many neurons. Selecting pulse-locked responses by temporal windowing revealed ITD sensitivity in these neurons. Using temporal windowing at lower pulse rates, and overall firing rate at higher pulse rates, neural thresholds for ITD sensitivity were comparable to perceptual thresholds in the better-performing human bilateral CI users over a wide range of pulse rates.
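The temporal-windowing step can be sketched as follows; the window length, pulse rate, and spike times are arbitrary illustrations rather than the study's recordings.

```python
# At low pulse rates, restrict spike counts to short windows following each
# pulse so that pulse-locked responses are not swamped by background activity.
import numpy as np

def windowed_rate(spike_times, pulse_times, window=0.005):
    """Firing rate computed only within `window` s after each pulse."""
    in_window = [np.sum((spike_times >= p) & (spike_times < p + window))
                 for p in pulse_times]
    return np.sum(in_window) / (len(pulse_times) * window)

# Toy data: pulses at 80 pps for 0.5 s, with pulse-locked spikes plus
# random background spikes.
rng = np.random.default_rng(1)
pulses = np.arange(0, 0.5, 1 / 80)
spikes = np.sort(np.concatenate([pulses + 0.002,             # locked
                                 rng.uniform(0, 0.5, 40)]))  # background
print(windowed_rate(spikes, pulses), "sp/s within windows",
      "vs", len(spikes) / 0.5, "sp/s overall")
```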

HRC Seminar with Brian Monson November 20th

Brian Monson, Brigham and Women’s Hospital

Title: The Effect of Abnormal Early Experience on Auditory Cortical Development

Abstract: Premature birth disrupts typical ontogenetic development of the human brain during a period of rapid neurodevelopment, with long-term behavioral consequences. For example, language processing deficits are a hallmark of children born very preterm. Abnormal auditory cortical processing associated with premature birth has been reported, but the neural underpinnings are unknown. In this talk I will report on recent efforts to characterize auditory cortical development in vivo in preterm infants using diffusion magnetic resonance imaging techniques. Microstructural differences between preterm and full-term infants are apparent in both gray and white matter in Heschl’s gyrus. Our results suggest that auditory cortical maturation might be particularly susceptible to preterm birth-related disturbances, perhaps due to abnormal auditory experience associated with premature transition from the intrauterine acoustic environment to that of the neonatal intensive care unit.
 

December 2015

HRC Seminar with Kyogu Lee December 11th

Kyogu Lee, Seoul National University

Title: Exploring acoustic markers from speech for diagnosis of depression in the elderly

Abstract: The human voice provides ample information about the speaker’s emotion; by listening to someone’s voice we can easily infer whether he/she is happy, sad, angry, or depressed. In this talk, I will present some preliminary results of our study, in collaboration with Seoul National University Bundang Hospital, exploring acoustic features from voice as a diagnostic marker of depression in the elderly. We recorded the voices of 76 euthymic controls (30 men and 46 women) and 56 depressive patients (16 men and 40 women) using a smartphone while each participant read 15 standard sentences (5 neutral, 5 positive mood induction, 5 negative mood induction) in the following order: neutral – negative – neutral – positive – neutral. We extracted several acoustic features from the recorded voices, which are grouped into duration-, frequency-, intensity-, and timbre-related features. Statistical evaluation confirms that a combination of these acoustic features leads to an AUC score of 0.978 in a 5-fold cross-validation (CV) experiment.
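A minimal sketch of the reported evaluation pipeline, with random placeholder features standing in for the duration-, frequency-, intensity-, and timbre-related measures; the logistic-regression classifier is my assumption, since the abstract does not name the model.

```python
# Combine acoustic features into one matrix and score a classifier with
# 5-fold cross-validated AUC. Features and labels are random placeholders,
# not the study's recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_controls, n_patients, n_features = 76, 56, 20
X = np.vstack([rng.normal(0.0, 1.0, (n_controls, n_features)),
               rng.normal(0.5, 1.0, (n_patients, n_features))])
y = np.concatenate([np.zeros(n_controls), np.ones(n_patients)])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       scoring="roc_auc", cv=cv)
print(f"mean 5-fold AUC = {aucs.mean():.3f}")
```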

HRC Seminar with Charlie Liberman December 18th

Charlie Liberman, Eaton-Peabody Lab

Title: Hidden Hearing Loss: Synaptopathy in noise-induced and age-related cochlear damage

Abstract: The classic view of sensorineural hearing loss (SNHL) is that the “primary” targets are hair cells, and that cochlear-nerve loss is “secondary” to hair cell degeneration. Our recent work in mouse and guinea pig has challenged that view. In noise-induced hearing loss, exposures causing only reversible threshold shifts (and no hair cell loss) nevertheless cause permanent loss of >50% of cochlear-nerve / hair-cell synapses. Similarly, in age-related hearing loss, degeneration of cochlear synapses precedes both hair cell loss and threshold elevation. This primary neural degeneration has remained hidden for two reasons: 1) the spiral ganglion cells, the cochlear neural elements commonly assessed in studies of SNHL, survive for years despite loss of synaptic connection with hair cells, and 2) the degeneration is selective for cochlear-nerve fibers with high thresholds. Although not required for threshold detection in quiet (e.g. threshold audiometry or auditory brainstem response threshold), these high-threshold fibers are critical for hearing in noisy environments. Our research suggests that 1) primary neural degeneration is an important contributor to the perceptual handicap in SNHL, and 2) in cases where the hair cells survive, neurotrophin therapies can elicit neurite outgrowth from spiral ganglion neurons and re-establishment of their peripheral synapses.