P70 Humanoid robot as an interface for autonomous auditory testing through speech audiometry
Human-robot interaction (HRI) is a multidisciplinary research field that studies, among other parameters, the acceptance, trust and engagement that humans show towards a robot while performing a specific task with it. Our previous studies with different participant groups (normal-hearing adults and hard-of-hearing children) show promising results in maintaining constant engagement throughout auditory testing when interacting with a robot, which is a current challenge due to the repetitiveness of the tests.
We propose a humanoid NAO robot as a social interface, coupled with the open-source Dutch instance of the Kaldi speech recognition toolkit, to autonomously run two speech audiometry tests: the digits-in-noise (DIN) test and the Nederlandse Vereniging voor Audiologie (NVA) phoneme-in-word lists. In both tests, a clinician is normally in charge of controlling the presentation of each stimulus and scoring each answer. The DIN test assesses speech perception in noise, quantified by the number of correctly identified digits presented in background noise, while the NVA phoneme lists assess speech perception, quantified by the number of correctly identified phonemes embedded in 3-phoneme consonant-vowel-consonant (CVC) words presented at different intensity levels.
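The two scoring rules described above can be sketched as follows. This is a minimal illustration only: the function names are hypothetical, the stimuli are represented as plain Python lists, and the exact clinical scoring protocols may differ in detail from this sketch.

```python
def score_din_triplet(presented, response):
    """DIN test sketch: a digit triplet is scored as correct only
    when all three digits are identified correctly (assumed
    all-or-nothing rule for illustration)."""
    return presented == response


def score_nva_word(presented, response):
    """NVA list sketch: count the correctly identified phonemes
    (0-3) in a CVC word, compared position by position."""
    return sum(p == r for p, r in zip(presented, response))


# Hypothetical examples: a digit triplet heard in noise,
# and a CVC word such as /b/ /u/ /s/ with one phoneme misheard.
print(score_din_triplet([4, 1, 7], [4, 1, 7]))          # True
print(score_nva_word(["b", "u", "s"], ["b", "u", "f"]))  # 2
```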
This study focuses on evaluating the robot as an autonomous testing interface that presents all stimuli and scores each of the participant's responses. The main aim is to offer a portable system and a positive experience through social interaction, ensuring replicability and helping to reduce clinicians' workload.
Our HRI consists of an introductory participant-robot conversation, an explanation of the test, a training phase in which participants learn how to record their responses, and the speech audiometry test itself. The interaction will be video-recorded from two angles: one focusing on the participant's facial expressions and another on their body language. The participants' perception of the robot will be evaluated through analysis of the verbal and non-verbal interactions, and through a questionnaire completed by the participant at the end of the interaction. The overall system (the speech recognition software and testing algorithm) will be assessed by comparing its performance (recognised words and computed scores) to an offline evaluation by a clinician.
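The system assessment described above amounts to comparing item-by-item scores from the automated pipeline against a clinician's offline scores. A simple agreement metric along these lines could look as follows; this is a hypothetical sketch of one possible analysis, not the study's actual evaluation procedure.

```python
def agreement_rate(auto_scores, clinician_scores):
    """Fraction of test items on which the automated system's
    score matches the clinician's offline score (hypothetical
    metric; the study's actual analysis may differ)."""
    if len(auto_scores) != len(clinician_scores):
        raise ValueError("score lists must be the same length")
    matches = sum(a == c for a, c in zip(auto_scores, clinician_scores))
    return matches / len(auto_scores)


# Hypothetical per-item scores (e.g. 1 = correct, 0 = incorrect):
# the automated system disagrees with the clinician on one of four items.
print(agreement_rate([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```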
Funding: VICI grant 918-17-603 from the Netherlands Organization for Scientific Research (NWO) and the Netherlands Organization for Health Research and Development (ZonMw), and the Heinsius Houbolt Foundation and the Rosalind Franklin Fellowship from University Medical Center Groningen, University of Groningen.