13th Speech in Noise Workshop, 20-21 January 2022, Virtual Conference

P21 Pupil measures predictive of hearing status-related listening effort as an output of the machine learning classification framework

Patrycja Książek
Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology – Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, Netherlands | Eriksholm Research Centre, Snekkersten, Denmark

Adriana A. Zekveld
Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology – Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, Netherlands

Dorothea Wendt
Eriksholm Research Centre, Snekkersten, Denmark | Department of Health Technology, Technical University of Denmark, Denmark

Thomas Koelewijn
Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands

Sophia E. Kramer
Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology – Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, Netherlands


An increasing number of pupillometry studies have shown that hearing-impaired (HI) listeners may differ from normal-hearing (NH) peers in the effort spent during listening. While multiple pupil measures have been shown to be sensitive to hearing status, it is currently not clear whether these measures relate to hearing-related changes in speech processing. Since changes in auditory processing are present every time HI listeners attempt to process speech, it is important to identify which, if any, pupil measures (e.g., mean pupil dilation, principal components) are most distinct for HI versus NH listeners at the trial level. The next step would be to test whether these measures generalize across listening situations. In this study, we tested the feasibility of reliably distinguishing HI from NH listeners based on a collection of pupil measures recorded in adverse listening conditions. In addition, we investigated the relative predictive value of these measures in classifying hearing status. We used a machine learning classification framework to classify hearing status (NH; HI) based on trial-level pupil responses recorded during a speech-in-noise test. Data were collected by Koelewijn et al. (2012, doi:10.1155/2012/865731; 2014, doi:10.1121/1.4863198) in 32 NH (31-76 years) and 32 HI (40-70 years) listeners. We used commonly applied measures of listening effort (Peak Pupil Dilation, Mean Pupil Dilation, Pupil Baseline) as well as temporal measures derived from Principal Component Analysis and Independent Component Analysis. To identify pupil measures specific to hearing status, and to determine the measures' sensitivity to performance and signal-to-noise ratio (SNR), we performed three classification tasks on subsets of pupil responses: we included either all trials at two fixed average intelligibility levels, only the correct trials at those same intelligibility levels, or only the correct trials at a single SNR. Lastly, we ranked the pupil measures by their importance in the classification process. We expected the pupil measures to differentiate the hearing groups, especially when performance and SNR were held constant. As hypothesized, classification performance was always above the baseline prediction, indicating that pupil measures tap into differences in speech processing between the tested groups (HI, NH). Holding performance or SNR constant did not increase classification performance. Some measures (e.g., the second principal component) were found to be important across classification conditions. Our results indicate that a machine learning classification framework might be able to aid in the automatic detection of hearing status based on pupil responses recorded in a speech-in-noise test.
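For illustration, a minimal sketch of a trial-level classification pipeline along these lines is given below (Python with scikit-learn). The synthetic data, the specific feature set (peak and mean pupil dilation, baseline, PCA scores), and the random-forest classifier with impurity-based importances are assumptions made for this sketch, not the authors' implementation.

    # Illustrative sketch only: data, features, and classifier are assumptions,
    # not the pipeline used in the study.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical trial-level pupil traces (n_trials x n_samples), baselines, and labels.
    n_trials, n_samples = 200, 300
    traces = rng.normal(size=(n_trials, n_samples))   # placeholder pupil traces
    baseline = rng.normal(size=n_trials)              # placeholder pupil baseline per trial
    labels = rng.integers(0, 2, size=n_trials)        # 0 = NH, 1 = HI (placeholder)

    # Trial-level summary measures: peak and mean pupil dilation.
    ppd = traces.max(axis=1)
    mpd = traces.mean(axis=1)

    # Temporal measures: scores on the first few principal components of the traces.
    pca_scores = PCA(n_components=3).fit_transform(traces)

    # Assemble the feature matrix and estimate classification performance by cross-validation.
    X = np.column_stack([ppd, mpd, baseline, pca_scores])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())

    # Rank the pupil measures by their importance in the classification.
    clf.fit(X, labels)
    names = ["PPD", "MPD", "baseline", "PC1", "PC2", "PC3"]
    for name, imp in sorted(zip(names, clf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")

In such a setup, the three classification tasks described above would correspond to filtering the trials (all trials, correct trials only, or correct trials at a single SNR) before assembling the feature matrix.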
