AI model for predicting adult cochlear implant candidacy using routine behavioral audiometry

Since first being approved by the FDA in 1985, over 1 million cochlear implant surgeries have been performed [[1], [2], [3], [4]], making cochlear implants the most successful and widely used neural prosthesis worldwide [5]. Although considered the treatment of choice for hearing rehabilitation of moderate to profound sensorineural hearing loss [6], only 2–13 % of adults who are eligible for cochlear implantation actually receive one [7,8]. This statistic is particularly alarming given the evidence linking untreated or undertreated hearing loss to significant secondary health sequelae, including social isolation [9], depression [10], falls [10], reduced productivity and employability, and cognitive impairment and dementia [10], among others.

Though behavioral audiometry is a routine, standardized diagnostic evaluation that is readily available across the US and most other developed countries, determining who might qualify for a cochlear implant based on this test alone is challenging, even for hearing health professionals. Currently, cochlear implant candidacy testing requires a more in-depth standardized battery consisting of best-aided word and sentence recognition testing, often in quiet and in noise using escalating signal-to-noise ratio presentations, and it is generally performed only by select centers. Driven by a prevailing misconception that adults must be “completely deaf” before considering cochlear implantation, and by related fears of referring someone for candidacy testing who might not qualify, there is often a delay of >10 years between initial candidacy and cochlear implantation. This delay is important because degree of residual hearing and duration of deafness are two primary predictors of outcome [1]. Furthermore, the prototypical adult cochlear implant candidate currently scores between 10 and 20 % on CNC word and AzBio sentence testing preoperatively in the ipsilateral ear, substantially poorer than accepted criteria or thresholds for insurance coverage [[11], [12], [13]].

Recognizing this critical gap between behavioral audiometry and formal cochlear implant speech perception testing, there has been growing interest in using standard behavioral audiometry to accurately identify the people most likely to qualify, thereby facilitating timely referral for formal candidacy testing. Initial attempts have focused on “rules of thumb” based on pure tone average (PTA) and word recognition score (WRS), such as the “75/40” rule by Gubbels et al. [14] and the “60/60” rule presented by Zwolan et al. [15]. These methods are attractive because they are scalable and can be readily implemented by clinicians with minimal experience in cochlear implant care. A limitation of these systems is their binary “yes/no” output (i.e., candidacy or no candidacy is predicted) without an associated probability or predicted speech perception score. Furthermore, these benchmarks are tied to specific candidacy criteria, which are expected to evolve incrementally over time.
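
To make the character of these rules of thumb concrete, the sketch below implements a generic PTA/WRS screener. The cutoff pairings follow the rules' names (75 dB HL / 40 % and 60 dB HL / 60 %), but the exact frequencies averaged into the PTA, the ear conventions, and the comparison boundaries vary by study; readers should consult the cited papers [14,15] for the precise definitions, and the function name here is illustrative only.

```python
# Hypothetical screener for PTA/WRS "rules of thumb"; a binary output
# with no probability or predicted speech perception score, as noted above.
def rule_of_thumb_referral(pta_db_hl: float, wrs_percent: float,
                           pta_cutoff: float = 60.0,
                           wrs_cutoff: float = 60.0) -> bool:
    """Return True if a patient should be referred for formal cochlear
    implant candidacy testing under a PTA/WRS rule of thumb.

    Assumed direction (worse hearing = higher PTA, lower WRS): refer when
    PTA is at or above the cutoff AND WRS is at or below the cutoff.
    """
    return pta_db_hl >= pta_cutoff and wrs_percent <= wrs_cutoff

# "60/60" rule (Zwolan et al. [15]): PTA >= 60 dB HL and WRS <= 60 %
print(rule_of_thumb_referral(65, 48))                     # True -> refer
# "75/40" rule (Gubbels et al. [14]): PTA >= 75 dB HL and WRS <= 40 %
print(rule_of_thumb_referral(65, 48,
                             pta_cutoff=75.0,
                             wrs_cutoff=40.0))            # False
```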

The objective of the present work is to develop an adaptive AI-based model that incorporates conventional unaided audiogram output (i.e., standard frequency-specific hearing thresholds and word recognition scores in quiet) to predict aided CNC monosyllabic word scores and AzBio sentence scores in quiet. This model may allow users to adjust candidacy cutpoints based on the intended use and to flexibly adapt model parameters as candidacy guidelines evolve.
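
The following is a minimal sketch of the general approach just described: a regressor maps unaided audiometric inputs to a predicted aided speech perception score, and a user-adjustable cutpoint converts predictions into referral flags. The feature set, model family (a gradient-boosted regressor here), and synthetic placeholder data are all illustrative assumptions for exposition, not the study's actual pipeline or clinical dataset.

```python
# Illustrative sketch only: synthetic data stand in for the clinical
# training set, and the model family is an assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Placeholder inputs: unaided thresholds (dB HL) at 0.25-8 kHz plus the
# unaided word recognition score in quiet (%), one row per ear.
FREQS_KHZ = [0.25, 0.5, 1, 2, 4, 8]
n = 500
thresholds = rng.uniform(20, 110, size=(n, len(FREQS_KHZ)))
wrs_quiet = rng.uniform(0, 100, size=(n, 1))
X = np.hstack([thresholds, wrs_quiet])

# Placeholder target: aided CNC word score (%). In practice this would be
# a measured best-aided score; an analogous regressor could be fit for
# AzBio sentence scores in quiet.
y_cnc = np.clip(100 - 0.6 * thresholds.mean(axis=1)
                + 0.4 * wrs_quiet.ravel()
                + rng.normal(0, 8, n), 0, 100)

model = GradientBoostingRegressor().fit(X, y_cnc)

def flag_for_referral(features: np.ndarray, cutpoint: float = 60.0) -> bool:
    """Flag a patient when the predicted aided CNC score falls below the
    cutpoint; the cutpoint is adjustable as candidacy criteria evolve."""
    return float(model.predict(features.reshape(1, -1))[0]) < cutpoint

patient = np.array([70, 75, 80, 85, 90, 95, 32.0])  # thresholds + WRS
print(flag_for_referral(patient))                   # default 60 % cutpoint
print(flag_for_referral(patient, cutpoint=40.0))    # stricter criterion
```

Unlike the binary rules of thumb, this style of output carries a continuous predicted score, so the same fitted model can serve different referral policies simply by moving the cutpoint.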
