2007 Seminars

January 2007

Seminar with Madhu Shashanka January 19th

Auditory Neuroscience Laboratory, Hearing Research Center and Department of Cognitive and Neural Systems, Boston University

“Probabilistic Models for Single-Channel Audio Processing”

Seminar with Mario Ruggero, Ph.D. January 26th

Professor, Department of Communication Sciences and Disorders, Northwestern University

“The human ear is unexceptional”

 

February 2007

Seminar with Kamal Sen, Ph.D. February 9th

Assistant Professor, Biomedical Engineering, Boston University

“Neural Discrimination of Complex Natural Sounds in Songbirds”

Seminar will be held at the Center for Adaptive Systems, Department of Cognitive and Neural Systems, and the Center of Excellence for Learning in Education, Science, and Technology (CELEST)

Seminar with Robert Burkard, Ph.D. February 16th

Professor, Rehabilitation Sciences, Adjunct Professor, Psychology, Associate Professor, Otolaryngology, University at Buffalo, The State University of New York

“Is Auditory Neuropathy/Auditory Dys-synchrony Likely the Result of Selective Inner Hair Cell Loss?”

 

March 2007

Seminar with Antje Ihlefeld March 2nd

Ph.D. Candidate in Cognitive & Neural Systems, Auditory Neuroscience Lab, Boston University

“Strategies of spatial listening for speech comprehension”

Dissertation defense

 

April 2007

Seminar with Douglas Vetter, Ph.D. April 6th

Tufts University School of Medicine, Dept. of Neuroscience
Dept. of Biomedical Engineering, Tufts University

“Corticotropin releasing hormone receptors of the inner ear: A pathway to prevention of trauma-induced deafness?”

Seminar with Lynne Werner, Ph.D. April 13th

Professor, Department of Speech and Hearing Sciences, University of Washington, Seattle

“How Infants Deal with Uncertainty”

Seminar with David O’Gorman, Ph.D. April 20th

Research Associate, Center for BioDynamics and Hearing Research Center, Departments of Mathematics and Biomedical Engineering, Boston University

“Dynamical mechanisms of neural firing irregularity and modulation sensitivity”

Seminar with John Buck, Ph.D. April 27th

Professor of Electrical and Computer Engineering
University of Massachusetts at Dartmouth

“Cepstral processing models for bat biosonar”

 

May 2007

Seminar with Timothy J. Gardner, Ph.D. May 11th

Postdoctoral Fellow, Fee Lab, Massachusetts Institute of Technology

Seminar with Cynthia Moss, Ph.D. May 18th

Professor of Psychology, University of Maryland, Director of the Neuroscience and Cognitive Science Program

“Spatial orientation by sonar: What the bat’s voice tells the bat’s brain”

Seminar with Kamal Sen, Ph.D. May 24th

Assistant Professor, Department of Biomedical Engineering, Boston University

“Neural Discrimination of Complex Natural Sounds in Songbirds”

Seminar with Brad May, Ph.D. May 25th

Professor, Department of Otolaryngology, Johns Hopkins University

“The auditory representation of spectral cues for sound localization”

 

June 2007

Seminar with Dominic Mangiardi June 1st

Ph.D. Candidate in Biomedical Engineering, Boston University

“Molecular and quantitative spatial analysis of aminoglycoside-induced hair cell death and regeneration in the avian cochlea”

Dissertation Defense

Seminar with Matthew Goupell, Ph.D. June 25th

Austrian Academy of Sciences, Acoustics Research Institute, Vienna, Austria

“Improving cochlear implant ITD perception”

 

July 2007

Seminar with Chuping Liu, Ph.D. candidate July 6th

University of Southern California
Department of Auditory Implant and Perception at House Ear Institute
Research Laboratory of Electronics at MIT

“Speech Perception Optimization for Cochlear Implants through Signal Processing Approaches”

Seminar with Lisa Shatz, Ph.D. July 13th

Department of Electrical Engineering, Suffolk University

“The response of rat vibrissae to sound”

Seminar with Robert Carlyon July 23rd

MRC Cognition and Brain Sciences Unit, Medical Research Council, UK

“Pitch perception & sound segregation by cochlear implant users and normal-hearing listeners”

Seminar with Ingrid Johnsrude July 24th

MRC Cognition and Brain Sciences Unit, Cambridge UK
Dept. of Psychology, Queen's University, Canada

“Perceptual learning and voice familiarity facilitate speech comprehension: Under what conditions, and how?”

 

September 2007

Seminar with Nicole Marrone, Ph.D. candidate September 14th

Sargent College, Boston University

“The benefit of separation between multiple talkers: A comparison of aided and unaided listening in reverberant rooms”

Dissertation Defense

Seminar with Barbara Shinn-Cunningham, Ph.D. September 21st

Associate Professor, Departments of Cognitive and Neural Systems and Biomedical Engineering
Boston University

“Why hearing impairment may degrade selective attention”

Abstract: In everyday settings, the ability to selectively attend is critical for communication. Most normal-hearing listeners are able to selectively attend to a talker of interest in a sea of competing sources, and to rapidly shift attention as the need arises. However, hearing impaired (HI) listeners and cochlear implant (CI) users have difficulty communicating when there are multiple sources. This talk will review experiments investigating selective attention in normal listeners. Results suggest that selective attention operates to select out perceptual “objects,” and thus depends directly on the ability to separate a source of interest from a mixture of competing sources. In turn, results suggest that one important factor affecting how well hearing impaired listeners can communicate in everyday settings is their ability to perceptually organize the auditory scene.

Seminar with Daniel E. Shub, Ph.D. September 28th

Department of Psychology
University of Pennsylvania

“Psychophysical spectro-temporal receptive fields in an informational masking task”

Abstract: Traditionally, comparison between psychophysical and physiological data has relied on trial-based approaches where a short stimulus is presented and a response is recorded. Characterizing neurons in the auditory system with trial-based approaches, however, is often less efficient than approaches that utilize continuous stimuli (i.e., not trial based). In this talk, a psychophysical method, analogous to the physiological methods used to estimate the spectro-temporal receptive fields (STRFs) of neurons, is introduced. Human subjects were trained to respond as quickly as possible whenever they detected a target sequence of four 50-ms tone pips with a frequency of 1000 Hz in the presence of a masker. The masker consisted of temporally and spectrally random 50-ms tone pips. The expected number of masker pips at any moment in time was six. The masker was continuously presented and the target was added at random times. Each block lasted for approximately five minutes and there were on average 100 signal presentations during a block. The responses were sorted as either hits or false alarms, and then response-triggered averaged spectrograms for the hits and false alarms were calculated. The average stimulus from when false alarms occurred is similar to a noisy version of the target signal; subjects responded that there was a signal about 700 ms after the stimulus had energy at the signal frequency. The measured response-triggered spectrograms are consistent with previous estimates of spectro-temporal weighting patterns from trial-based informational masking tasks. An advantage of the current method is that it provides a means of comparing psychophysical results to physiological STRFs. [Supported by NIH DC02012]
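The core computation the abstract describes — averaging the stimulus spectrogram in a window preceding each response — can be sketched in a few lines. This is an illustrative reconstruction, not the speaker's code; the function name, window length, and data layout are assumptions.

```python
import numpy as np

def response_triggered_spectrogram(spec, frame_times, response_times, window=1.0):
    """Average the spectrogram over a window preceding each response.

    spec           : (n_freqs, n_frames) stimulus spectrogram
    frame_times    : (n_frames,) time of each spectrogram frame, in seconds
    response_times : times (s) at which the subject pressed the button
    window         : how far back to average, in seconds (assumed value)
    """
    dt = frame_times[1] - frame_times[0]          # frame spacing
    n_win = int(round(window / dt))               # window length in frames
    segments = []
    for t in response_times:
        end = np.searchsorted(frame_times, t)     # frame index of the response
        if end >= n_win:                          # skip responses too near the start
            segments.append(spec[:, end - n_win:end])
    # Average across responses -> (n_freqs, n_win) triggered spectrogram
    return np.mean(segments, axis=0)
```

Computing this separately for responses sorted as hits and as false alarms yields the two response-triggered spectrograms the abstract compares.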

 

October 2007

Seminar with Adrian (KC) Lee, Ph.D. October 12th

Athinoula A. Martinos Center for Biomedical Imaging
Department of Psychiatry, Harvard Medical School

“Influence of Spatial Cues on the Identification and the Localization of Objects in the Auditory Foreground”

Seminar with Psyche Loui, Ph.D. October 26th

Department of Neurology
Beth Israel Hospital / Harvard Medical School

“Rapid Statistical Learning of a New Musical System”

 

November 2007

Seminar with Norbert Kopco, Ph.D. November 2nd

Center for Cognitive Neuroscience, Duke University
Dept. of Cognitive and Neural Systems, Boston University
Technical University of Kosice, Slovakia

“Visual calibration of auditory spatial perception in humans and monkeys”

Seminar with Eric Thompson November 9th

Ph.D. student, Centre for Applied Hearing Research, Technical University of Denmark
Dept. of Cognitive and Neural Systems and Hearing Research Center, Boston University

“Binaural processing of fluctuating interaural level differences”

Abstract: Interaural level fluctuations can be created by amplitude modulations in a reverberant environment due to interaural phase differences in the modulation transfer function. In order to understand how envelopes are processed in reverberant environments, two psychophysical amplitude modulation detection experiments were performed. The first experiment was aimed at measuring a baseline sensitivity to interaural level fluctuations by measuring the minimum modulation depth required to discriminate between interaurally homophasic and antiphasic amplitude modulation imposed on high-frequency pure-tone or narrow-band noise carriers. In addition, ILD modulation frequency tuning curves were obtained by measuring the antiphasic/homophasic AM discrimination thresholds in the presence of masking modulators. In the second experiment, subjective modulation transfer functions were measured monaurally and binaurally with a dichotic impulse response. The results showed that an interaural phase difference in the modulation transfer function can be used to give a binaural advantage over “best ear” listening in a modulation detection experiment.
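As a toy illustration of the abstract's premise (not the actual experimental stimuli), the sketch below shows how antiphasic amplitude modulation across the two ears produces a fluctuating interaural level difference; the modulation rate, depth, and duration are arbitrary assumptions.

```python
import numpy as np

fs = 16000                       # sample rate (Hz)
t = np.arange(0, 0.25, 1 / fs)   # 250 ms of signal
fm, m = 20.0, 0.5                # modulation rate (Hz) and depth (assumed values)

# The same AM imposed at the two ears, but 180 degrees out of phase
env_left = 1 + m * np.sin(2 * np.pi * fm * t)
env_right = 1 + m * np.sin(2 * np.pi * fm * t + np.pi)

# The instantaneous ILD (in dB) follows the envelope ratio and
# fluctuates at the modulation rate
ild = 20 * np.log10(env_left / env_right)
print(round(ild.max(), 2))  # peak ILD: one envelope at its max, the other at its min
```

With depth m = 0.5 the envelope ratio peaks at 1.5/0.5 = 3, i.e. a peak ILD of 20·log10(3) ≈ 9.5 dB, even though the long-term levels at the two ears are identical.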

Seminar with Rapeechai (Pom) Navawongse November 30th

Ph.D. Student, Dept. of Biomedical Engineering, Boston University

“Extracellular Single Neuron Recording in Dorsal Cochlear Nucleus of the Awake Gerbil”

 

December 2007

Seminar with Pierre Divenyi, Ph.D. December 7th

Speech and Hearing Research Laboratory
Department of Veterans Affairs
VA Northern California Health Care System and East Bay Institute for Research and Education
Martinez, California

“Decomposition of speech into articulatory gesture functions: Answers and questions”

Abstract: Speech is produced by an ensemble of articulatory gestures that dynamically change properties of the vocal cords and the vocal tract. Thus, in principle, there must be an algorithmic way of transforming the ensemble of gestures into the speech signal and, conversely, transforming the speech signal into the set of gesture functions that generated it in the first place. The reality, unfortunately, is more complicated because, as shown by many mathematically quite capable researchers, both the transform and its inverse can lead to multi-valued functions. In ongoing work at our lab, we adapted a procedure of speech synthesis from articulatory gestures (Saltzman, 1986; Saltzman and Kelso, 1987; Browman and Goldstein, 1990) to associate the speech signal with the underlying gesture functions. A machine learning experiment performing such association on a number of training tokens shows that gesture functions can be predicted from a test token’s waveform. By considering the ensemble of gesture functions as an equivalent representation of the speech signal, we analyzed listeners’ responses to a series of disyllabic spondee words disfigured by replacing their centers with various non-speech fillers, in terms of the information transmitted by each of the gestures. Such an analysis has the potential of offering a way to construct confusion matrices with the gestures representing continuous analogs of the Jakobsonian distinctive features, and potentially allows differentiation of bottom-up and top-down processes in the intelligibility of speech presented under inclement acoustic conditions. (Saltzman, E. L. (1986). Task dynamic coordination of the speech articulators: a preliminary model. In H. Heuer and C. Fromm (Eds.), Generation and modulation of action patterns (Vol. 15, pp. 129-144). New York: Springer-Verlag. Saltzman, E. L., and Kelso, J. A. (1987). Skilled actions: A task dynamic approach. Psychological Review, 94, 84-106. Browman, C. P., and Goldstein, L. (1990). Representation and reality: physical systems and phonological structure. Journal of Phonetics, 18, 411-424.)