Effects of sensorineural hearing loss on formant-frequency discrimination: Measurements and models

Hearing Research (Elsevier). Available online 8 May 2023, article 108788.

ABSTRACT

This study concerns the effect of hearing loss on the discrimination of formant frequencies in vowels. In the healthy ear's response to a harmonic sound, auditory-nerve (AN) rate functions fluctuate at the fundamental frequency, F0. Responses of inner hair cells (IHCs) tuned near spectral peaks are captured (or dominated) by a single harmonic, resulting in lower fluctuation depths than responses of IHCs tuned between spectral peaks. The depth of neural fluctuations (NFs) therefore varies along the tonotopic axis and encodes spectral peaks, including the formant frequencies of vowels. This NF code is robust across a wide range of sound levels and in background noise. The NF profile is converted into a rate-place representation in the auditory midbrain, where neurons are sensitive to low-frequency fluctuations. The NF code is vulnerable to sensorineural hearing loss (SNHL) because capture depends on IHC saturation, and thus on the interaction of cochlear gain with IHC transduction. In this study, formant-frequency difference limens (DLFFs) were estimated for listeners with normal hearing or mild-to-moderate SNHL. The F0 was fixed at 100 Hz, and formant peaks were either aligned with harmonic frequencies or placed between harmonics. Formant peak frequencies were 600 and 2000 Hz, in the range of the first and second formants of several vowels. Task difficulty was varied by changing formant bandwidth, which modulates the contrast in the NF profile. Results were compared to predictions from model auditory-nerve and inferior colliculus (IC) neurons, with listeners' audiograms used to individualize the AN model. Correlations between DLFFs, audiometric thresholds near the formant frequencies, age, and scores on the Quick Speech-in-Noise (QuickSIN) test are reported. SNHL had a strong effect on the DLFF for the second formant frequency (F2) but a relatively small effect on the DLFF for the first formant (F1). The IC model correctly predicted substantial threshold elevations for changes in F2 as a function of SNHL and little effect of SNHL on thresholds for changes in F1.
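The capture mechanism behind the NF profile can be illustrated with a toy calculation. This is a minimal sketch, not the study's AN/IC model: the resonance-shaped vowel spectrum, Gaussian filter shape, and all parameter values except F0 and the formant frequencies are assumptions. It uses the fact that the envelope of a sum of two cosines with amplitudes a1 >= a2 swings between a1 + a2 and a1 - a2, so its modulation depth (max - min)/(max + min) equals a2/a1: when one harmonic dominates a channel (capture, near a formant peak), fluctuations are shallow; when two harmonics are comparable (between peaks), the channel fluctuates deeply at F0.

```python
import math

F0 = 100.0                     # fundamental frequency (Hz), as in the study
FORMANTS = (600.0, 2000.0)     # formant peak frequencies (Hz), as in the study
FBW = 80.0                     # formant bandwidth (Hz): an assumed value

def vowel_amp(f):
    """Toy vowel spectral envelope: resonance-shaped peaks at the
    formants (a stand-in for the synthetic vowels used in the study)."""
    return sum(1.0 / (1.0 + ((f - fc) / FBW) ** 2) for fc in FORMANTS)

def fluctuation_depth(cf, q=8.0):
    """Approximate F0-rate fluctuation depth in a channel tuned to cf.

    Harmonics are weighted by the vowel spectrum and by a Gaussian
    filter of width cf/q; for two dominant components a1 >= a2, the
    envelope modulation depth (max - min)/(max + min) equals a2/a1,
    i.e., the ratio of the second-largest to the largest amplitude.
    """
    bw = cf / q
    amps = sorted(
        (vowel_amp(h * F0) * math.exp(-0.5 * ((h * F0 - cf) / bw) ** 2)
         for h in range(1, 40)),
        reverse=True,
    )
    return amps[1] / amps[0]

# Channels tuned to a formant are captured by one harmonic (shallow
# fluctuations); a channel tuned between formants fluctuates deeply at F0.
for cf in (600.0, 1300.0, 2000.0):
    print(cf, round(fluctuation_depth(cf), 2))
```

Narrowing FBW sharpens the spectral peaks and deepens the contrast between on-formant and between-formant channels, which is the manipulation the study uses to vary task difficulty.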

INTRODUCTION

The importance of speech as a communication signal motivates studies of its neural encoding. In listeners with normal hearing, speech intelligibility is robust across a wide range of sound levels and background noise, and in the presence of temporal and spectral distortions. In contrast, listeners with even relatively small amounts of hearing loss have difficulty understanding speech in noise. A better understanding of neural speech coding would illuminate this difficulty and improve …

LISTENERS

Thirty-four participants (ages 18–79 years; 26 female, 8 male) were recruited and tested using procedures approved by the University of Rochester Institutional Review Board. All participants were initially screened with a standard audiogram (0.25–8 kHz; Fig. 2) and were excluded if hearing loss (HL) was asymmetric (>15 dB difference in HL between the two ears) or if HL was greater than 80 dB. Audiometric thresholds were used as input parameters for the computational models (see below), and …
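One plausible reading of the screening criteria above can be sketched as follows. This is hypothetical code, not the authors': the function name is invented, and the criteria are applied per audiometric frequency, whereas the study may instead compare, e.g., pure-tone averages.

```python
def eligible(left_hl_db, right_hl_db):
    """Apply the stated inclusion criteria to per-ear audiometric
    thresholds (dB HL) at matched frequencies (e.g., 0.25-8 kHz).

    Exclude if loss is asymmetric (>15 dB HL difference between ears)
    or if any threshold exceeds 80 dB HL.
    """
    for left, right in zip(left_hl_db, right_hl_db):
        if abs(left - right) > 15:    # asymmetric hearing loss
            return False
        if left > 80 or right > 80:   # loss greater than 80 dB
            return False
    return True

# Symmetric mild-to-moderate loss passes the screen:
print(eligible([10, 15, 30, 40, 45, 50], [10, 20, 35, 40, 50, 55]))  # True
```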

THRESHOLDS VS. HL AT FORMANT FREQUENCIES

Subject thresholds for the F2 and F1 conditions, as a function of HL and stimulus bandwidth, are presented in Fig. 5; results from ANOVAs on linear mixed-effects models for the same data are presented in Table 1. For the F2 condition there was a strong, positive, linear correlation between threshold and HL. These correlations were similar for all three tested bandwidths and for both ON and BTW stimuli (see the Fig. 5 legend for linear-regression R2 values). A significant …
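The regression summaries reported above can be reproduced in form (not in values) with ordinary least squares. The sketch below is illustrative only: the data points are made up and are NOT the study's results, and the mixed-effects structure (per-subject random effects) is omitted.

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares slope, intercept, and R^2 for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical numbers for illustration (not the study's data):
hl = [5, 10, 20, 35, 50, 60]            # HL near F2 (dB)
dlff = [1.1, 1.4, 2.3, 3.6, 5.2, 6.0]   # DLFF for F2 (arbitrary units)
slope, intercept, r2 = linear_fit_r2(hl, dlff)
print(round(slope, 3), round(r2, 3))    # positive slope, high R^2
```

A positive slope with similar R2 across bandwidths and across ON/BTW stimuli is the pattern the F2 data show; the mixed-effects models in Table 1 additionally account for repeated measures within listeners.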

DISCUSSION

The goal of this study was to test the hypothesis that NF cues support the encoding of formant frequencies in vowels. Two factors that affect NF coding in computational models of midbrain responses to vowels are formant bandwidth and SNHL. The results of manipulating the bandwidth of formants in synthetic vowel-like sounds and of testing listeners with a range of SNHL generally supported the hypothesis. Furthermore, these results showed a surprising insensitivity of F1 discrimination to SNHL and a …

Declaration of Competing Interest

None.

ACKNOWLEDGEMENT

Supported by NIH grant R01-DC001641.



© 2023 Published by Elsevier B.V.
