P69 Comparison of a humanoid robot interface and a laptop interface used for auditory psychophysical tasks
Maintaining engagement during long and arduous speech perception tasks can be difficult, potentially resulting in a loss of attention and reduced performance. Such tasks are nonetheless valuable for better understanding the mechanisms of speech perception. Currently, these tasks are performed via a laptop or desktop computer interface, through which auditory stimuli are presented and responses are given either verbally or, more interactively, with a mouse click. To provide a higher level of interaction, we propose the humanoid NAO robot as a potential interface for performing auditory psychophysical tasks.
Twenty-eight normal-hearing adults performed four speech perception tasks, presented in the form of games, on both the laptop and the NAO robot; the tasks evaluated voice cue sensitivity, voice gender categorisation, vocal emotion identification, and speech-on-speech perception. For the voice cue sensitivity task, stimuli were processed concurrently on both interfaces by altering the voice cues of fundamental frequency (F0) or vocal tract length (VTL); for all other tasks, stimuli were pre-processed using predefined F0 and VTL values. Stimuli were presented through the laptop's loudspeakers and NAO's speakers. Responses were logged by clicking the corresponding buttons on the laptop screen with the mouse, and by touching the tactile sensors on the robot's hands and head.
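The abstract does not specify the signal-processing pipeline used to alter the voice cues. As an illustration only, F0 and VTL differences in such experiments are commonly expressed in semitones, which map to multiplicative frequency (or spectral-envelope) ratios; the sketch below shows that conversion applied to a synthetic harmonic tone. The function names and the tone synthesis are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def semitones_to_ratio(semitones: float) -> float:
    """Convert a shift in semitones to a multiplicative frequency ratio."""
    return 2.0 ** (semitones / 12.0)

def harmonic_tone(f0: float, sr: int = 16000, dur: float = 0.5,
                  n_harmonics: int = 5) -> np.ndarray:
    """Synthesise a simple harmonic complex with fundamental f0 (Hz).

    Purely illustrative stand-in for a speech stimulus; amplitudes fall
    off as 1/k so higher harmonics are progressively weaker.
    """
    t = np.arange(int(sr * dur)) / sr
    return sum(np.sin(2 * np.pi * f0 * k * t) / k
               for k in range(1, n_harmonics + 1))

# Example: a reference voice at 120 Hz shifted up by 4 semitones,
# i.e. multiplied by 2**(4/12) ≈ 1.26.
f0_ref = 120.0
f0_shifted = f0_ref * semitones_to_ratio(4.0)
tone = harmonic_tone(f0_shifted)
```

In practice, F0 and VTL manipulations of recorded speech are typically done with an analysis–synthesis vocoder rather than pure-tone synthesis; the semitone-to-ratio arithmetic, however, is the same.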
As a first step towards determining whether the robot can serve as a reliable interface for measuring psychophysical performance with auditory stimuli, the results of the perception tasks obtained from the two interfaces were compared. Three of the four tasks showed no significant difference between participants' performance on the two interfaces; only the voice cue sensitivity task differed significantly. Several limitations could affect NAO's ability to collect data reliably, such as the quality of the robot's speakers, delays in stimulus processing and response logging, or the placement of the tactile sensors in three-dimensional space compared with buttons on a flat laptop screen.
Despite these potential limitations, the robot still provided data comparable to those from the current laptop interface. Future studies will investigate how participants perceive the robot and which factors, if any, could contribute to observable differences in performance measures between the two interfaces.