2022 Seminars

October 2022

HRC Seminar with Sarah Villard October 14th

Sarah Villard, Boston University

Title: Listening effort during informational masking tasks

Abstract: N/A

HRC Seminar with Bill Hartmann October 21st

Bill Hartmann, Michigan State University

Title: Localization of tones in rooms by moving listeners

Abstract: N/A

November 2022

HRC Seminar with Matthew Ning November 18th

Matthew Ning, PhD Candidate, Boston University

Title: Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS

Abstract: N/A

December 2022

HRC Seminar with Yoojin Chung December 2nd

Yoojin Chung, Decibel Therapeutics

Title: Development of AAV-based gene therapy for congenital hearing loss

Abstract: N/A

HRC Seminar with Monty Escabi December 16th

Monty Escabi, University of Connecticut

Title: Encoding and perceiving the texture of sounds: neural codes for recognizing and categorizing auditory texture and for listening in noise

Abstract: Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are often composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds such as those from moving cars, sirens, and people talking are perceived in unison and recognized collectively as a single sound (e.g., city noise). Yet in other instances, such as the cocktail party problem, multiple sounds compete for attention, so that background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as those of a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and the neural codes that underlie their perception, however, are largely unknown. Using multi-channel neural recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to spectral and temporal modulation cues, contribute to texture perception and are reflected in neural activity. I will then show results from our recent perceptual and complementary neural coding studies on how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.