Anatomy-based fitting improves speech perception in noise for cochlear implant recipients with single-sided deafness

This prospective interventional study was conducted with approval from the Ethics Committee at the Medical University of Würzburg (ethics approval number: 204/20) and in accordance with the Declaration of Helsinki. All participants provided verbal and written informed consent prior to the start of the study.

Participants

All of the following inclusion criteria applied: (1) to have a post-operative flat panel volume computed tomography (fpVCT) image with a secondary reconstruction at 99 μm (fpVCT-SECO) [25]; (2) to be at least 18 years old at the start of the study; (3) to have postlingual-onset SSD with normal hearing in the contralateral ear, where SSD is defined as a mean pure-tone average (PTA) threshold at 0.5, 1, 2, and 4 kHz of ≥ 70 dB HL in the poorer ear and ≤ 30 dB HL in the better ear (interaural threshold gap ≥ 40 dB HL), following the SSD classification of Van de Heyning et al. [1]; (4) to have at least 6 months of experience with a MED-EL SONNET 2 or RONDO 3 audio processor; (5) to have at least ten active intracochlear electrode contacts; (6) to use the FSP, FS4, or FS4-p sound coding strategy; (7) to have either a CI-aided speech perception score of ≥ 25% on a monosyllable perception test at 65 dB SPL or a CI-aided speech reception threshold (SRT) of ≤ 20 dB SNR on a sentence-in-noise perception test; (8) to be willing and able to give feedback on the fitted map; and (9) to give signed and dated informed consent before participating in any study-related procedures. Candidates who did not fulfill all inclusion criteria were excluded. No users of electric acoustic stimulation (EAS) CI devices were included, as the absence of residual hearing in the implanted ear was a prerequisite.

ABF procedure

The post-operative fpVCT-SECO images were imported into the OTOPLAN software (Version 3). The cochlear duct length at the level of the organ of Corti (OC) was calculated via the elliptic-circular approximation method [26]. The position of each intracochlear electrode contact along the OC was then measured from the center of the round window. OTOPLAN Version 3 adds an adjustment of 2.5 mm to account for the cochlear hook region, ensuring that these measurements do not systematically underestimate the insertion depth of the electrode contacts [27]. No further manual adjustments were made. This information was then imported into the MAESTRO clinical fitting software (Version 9.0.5). Individual electrode contacts were displayed within the manufacturer’s pre-set frequency band distribution (70–8500 Hz) of the audio processor (SONNET 2 / RONDO 3). This served as the basis of the ABF frequency band allocation scheme. Most comfortable levels (MCLs) were adapted to reduce any unfavorable sounds. In contrast to Di Mario et al. [28], who applied the ABF information in the software without any modification, we adapted in particular the first and second frequency bands when necessary to physically match the real electrode contact to the frequency band in the audio processor.

In some participants, the tonotopic (OC) frequency of the apical electrode contact was displayed outside the first frequency band. This can happen when the first contact does not reach the apex (at around 620°–720°). To cover the OC frequency of the designated electrode contact, the lower frequency limit of the filter bank in the audio processor was shifted upward to allow tonotopic matching, as illustrated in Fig. 1. This cuts off stimulation for frequencies below approximately 350 Hz, as the real electrode contact lies within this frequency region (e.g., ID 5).
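Place-based frequency estimates of this kind are commonly related to Greenwood’s place-frequency function for the human cochlea, which maps a relative position along the OC to its characteristic frequency. This is an illustrative sketch only; OTOPLAN’s exact tonotopic mapping may differ:

```python
def greenwood_hz(x):
    """Greenwood place-frequency function for the human cochlea.

    x: relative distance from the apex along the organ of Corti
       (0.0 = apex, 1.0 = base).
    Returns the characteristic frequency in Hz, using the standard
    human parameters A = 165.4, a = 2.1, k = 0.88.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)
```

For example, positions near the apex map to frequencies below about 20 Hz, the midpoint maps to roughly 1.7 kHz, and the base to about 20 kHz, illustrating why an electrode that stops short of the apex cannot reach the lowest frequency bands without remapping.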

Fig. 1

The colored bands represent the acoustic frequency distributions ranging from 16–16,000 Hz and the electric frequency distributions ranging from 70–8500 Hz in the CI audio processor, for the right ear in red (R) and the left ear in blue (L). The left panel illustrates an allocation of the full electric frequency range (in light blue) from 70–8500 Hz in the CBF map, resulting in a significant mismatch at low frequencies. The right panel illustrates the ABF frequency allocation with the electric lower frequency limit (red line) in the CI map upshifted to reduce the mismatch

Assessment

Participants used their personal audio processors during the study. All assessments were conducted in a sound-isolated room. The participants were seated at the center of an array of nine loudspeakers (M52, Klein & Hummel, Georg Neumann GmbH, Berlin, Germany), spaced equidistantly in the frontal hemifield at a radius of 1.5 m and labelled 1–9 from the participant’s left to right. In this study, only loudspeakers 1 (−90° azimuth), 5 (0° azimuth), and 9 (90° azimuth) were used. The loudspeakers were connected to a pre-amplification system (Scarlett 18i20 2nd Generation, Focusrite, High Wycombe, UK). Speech perception tests were conducted using a custom program implemented in MATLAB (MathWorks, Natick, MA, USA).

Testing took place at two intervals: a baseline interval and a one-month post-baseline interval. During baseline testing, participants used their accustomed CBF map. After baseline testing, participants were fitted using ABF and asked to use only their ABF map for 4 weeks. At the one-month post-baseline interval, testing was performed with the ABF map using the same measures as the baseline interval.

Speech understanding in quiet

Speech understanding in quiet was assessed using the German Freiburg Monosyllables Test material presented at 65 dB SPL [29] with the speech signal presented from the front (S0). Three listening conditions were assessed: unilaterally to the CI via direct audio input; unilaterally to the NH ear in free field with the CI off; and to both the NH ear and CI in free field. Scores are reported as the percentage of correct responses. In clinical routine, masking of the contralateral ear for speech-in-quiet measurements is achieved with earplugs and earmuffs or with an insert earphone presenting constant noise. In this study, the impact of mapping on speech perception in quiet could only be assessed via direct audio input, to preclude possible overmasking. With the direct CI input, loudness matching took place prior to speech testing in quiet to ensure audibility of the streamed input. If required, the MCLs were adjusted.

Speech understanding in noise

Speech understanding in noise was assessed using the Oldenburg MATRIX sentence perception test with a variable noise (OLSA noise) presentation level and a fixed speech presentation level (at 65 dB SPL) [30]. The speech signals were meaningful sentences constructed from the MATRIX lists. For each stimulus-response trial, the percent of correctly identified words in each sentence was recorded. An adaptive testing procedure was used to estimate the signal-to-noise ratio (SNR) at which 50% of the words were correctly reported. The order of the presentations and the test lists were randomized to minimize the effects of training and fatigue.
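The adaptive rule of the Oldenburg MATRIX test adjusts the noise level from sentence to sentence based on the word score, with step sizes that shrink over the run. A deliberately simplified fixed-step sketch of such an SNR track, with a hypothetical `present_sentence` callback standing in for the actual test presentation, is:

```python
def adaptive_srt(present_sentence, n_trials=20, start_snr=0.0, step_db=2.0):
    """Simplified fixed-step adaptive track converging toward the SNR
    at which 50% of words are reported correctly (SRT50).

    present_sentence(snr) -> proportion of words correct (0.0-1.0);
    this is a hypothetical stand-in for the real test presentation.
    The actual OLSA procedure uses word-score-dependent step sizes.
    """
    snr = start_snr
    track = []
    for _ in range(n_trials):
        prop_correct = present_sentence(snr)
        track.append(snr)
        # Harder (lower SNR) after a correct trial, easier after a miss.
        snr += -step_db if prop_correct >= 0.5 else step_db
    # Estimate SRT50 as the mean SNR over the second half of the track.
    half = track[len(track) // 2:]
    return sum(half) / len(half)
```

With an idealized listener whose threshold sits at −5 dB SNR, the track oscillates around that value and the estimate converges to it.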

To quantify the binaural hearing effects of squelch and spatial release from masking (SRM), the Oldenburg MATRIX test was conducted in two spatial configurations: S0N0 (speech and noise presented from 0° azimuth) and S0NCI (speech presented from 0° azimuth; noise directed to the CI ear at ± 90° azimuth).

Binaural effects

Binaural effects were calculated following the protocol of van de Heyning et al. [1].

The squelch effect is the benefit of listening binaurally with a CI and NH ear relative to listening with an NH ear alone (as in untreated SSD) when the speech and noise are spatially separated:

Squelch (dB) = SRT50(S0NCI)NH-only − SRT50(S0NCI)CI+NH.

The summation effect is the benefit of listening binaurally with a CI and NH ear relative to listening with an NH ear alone when the speech and noise are collocated:

Summation (dB) = SRT50(S0N0)NH-only − SRT50(S0N0)CI+NH.

Spatial release from masking is the benefit of listening when speech and noise are spatially separated relative to when speech and noise are collocated:

SRM (dB) = SRT50(S0N0)NH-only − SRT50(S0NCI)CI+NH.
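The three difference measures above amount to simple subtractions of SRT values. A sketch with one helper per measure follows; the SRT arguments (in dB SNR) are hypothetical, and because a lower SRT means better speech perception, a positive difference indicates a binaural benefit:

```python
def squelch(srt_s0nci_nh_only, srt_s0nci_ci_nh):
    """Squelch (dB): NH-only minus CI+NH SRT, speech and noise separated."""
    return srt_s0nci_nh_only - srt_s0nci_ci_nh

def summation(srt_s0n0_nh_only, srt_s0n0_ci_nh):
    """Summation (dB): NH-only minus CI+NH SRT, speech and noise collocated."""
    return srt_s0n0_nh_only - srt_s0n0_ci_nh

def srm(srt_s0n0_nh_only, srt_s0nci_ci_nh):
    """SRM (dB): collocated NH-only SRT minus separated CI+NH SRT."""
    return srt_s0n0_nh_only - srt_s0nci_ci_nh

# Hypothetical example: an NH-only SRT of -2.0 dB SNR and a CI+NH SRT
# of -3.5 dB SNR in the S0NCI configuration give a 1.5 dB squelch benefit.
```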

Self-perceived sound quality

Participants’ self-perceived rating of sound quality with each map was assessed using the Hearing Implant Sound Quality Index (HISQUI19) [31]. The HISQUI19 consists of 19 questions answered on a scale from “Always” (7 points) to “Never” (1 point), with an additional option of “Not applicable” (0 points). The total score is the sum of the numerical values of all 19 questions. Results are classified as follows: < 30 points indicates “very poor sound quality”, 30–60 points “poor sound quality”, 61–90 points “moderate sound quality”, 91–110 points “good sound quality”, and ≥ 111 points “very good sound quality”.
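The scoring and classification described above reduce to a sum over the 19 item responses followed by a threshold lookup. A sketch using the cut-offs given in the text (assuming integer totals, so ≥ 111 marks the top category):

```python
def hisqui19_total(responses):
    """Sum the 19 HISQUI19 item scores.

    Each response is 7 ("Always") down to 1 ("Never"),
    or 0 ("Not applicable").
    """
    if len(responses) != 19:
        raise ValueError("HISQUI19 has exactly 19 items")
    if not all(0 <= r <= 7 for r in responses):
        raise ValueError("each item score must be between 0 and 7")
    return sum(responses)

def hisqui19_category(total):
    """Map a total score to the sound-quality category from the text."""
    if total < 30:
        return "very poor sound quality"
    if total <= 60:
        return "poor sound quality"
    if total <= 90:
        return "moderate sound quality"
    if total <= 110:
        return "good sound quality"
    return "very good sound quality"
```

The maximum possible total is 19 × 7 = 133, which falls in the “very good” band.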

Statistical analysis

Descriptive statistics (mean ± standard deviation, SD) were used to report demographic and clinical characteristics (e.g., age at testing, CI hearing experience with each ear). Descriptive statistics were also used to describe the study outcomes. Results were normally distributed according to the Kolmogorov-Smirnov test and the Shapiro-Wilk test.

Paired-samples t-tests were used to assess whether test outcomes with the CBF and ABF maps differed significantly. Separate tests were conducted for each listening condition and each speech-in-noise configuration. p-values < 0.05 were regarded as significant. To account for multiple comparisons, p-values were adjusted using the Holm-Bonferroni method for each test outcome.
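The Holm-Bonferroni step-down adjustment can be sketched as follows (a generic implementation for illustration; the statistics package used in the study may differ in rounding or presentation):

```python
def holm_bonferroni(pvals):
    """Holm-Bonferroni step-down adjustment.

    Returns adjusted p-values in the original order: the i-th smallest
    raw p-value is multiplied by (m - i), capped at 1.0, and a running
    maximum enforces monotonicity across the sorted sequence.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # keep adjusted p-values monotone
        adjusted[i] = running_max
    return adjusted
```

An adjusted p-value is then compared against the overall alpha of 0.05, which is equivalent to the classic step-down rejection rule.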

Statistical analysis was implemented with SPSS Statistics (Version 25, IBM, Armonk, New York, USA). Figures were prepared with Prism (Version 8.1, GraphPad Software, San Diego, USA).
