13th Speech in Noise Workshop, 20-21 January 2022, Virtual Conference

P20 The impact of visual, acoustic and semantic cues on processing of face mask speech by children and adults

Julia Schwarz, Katrina K. Li, Jasper Hong Sim, Yixin Zhang
University of Cambridge, Cambridge, UK

Elizabeth Buchanan-Worster
MRC Cognition and Brain Sciences Unit, Cambridge, UK

Brechtje Post, Jenny Gibson, Kirsty McDougall
University of Cambridge, Cambridge, UK


Emerging research indicates that face masks can cause language processing difficulties (Brown, Van Engen & Peelle, 2021, Cogn. Research 6(1):49). However, it is still unclear to what extent these difficulties are caused by the visual obstruction of the speaker’s mouth or by changes to the acoustic signal. Moreover, research in this area has so far concentrated on adults’ masked speech perception rather than children’s. The present study investigated the extent to which children and adults process masked speech more slowly than normal speech, whether this effect is due to missing visual cues or to acoustic degradation, and whether the effect is reduced in sentences with high semantic predictability. Since children are somewhat less experienced in using semantic cues for predictive speech processing (Hahne, Eckstein & Friederici, 2004, JoCN 16(7):1302), they could be affected differently by these combined factors than adults.

The study was conducted on the online experiment platform Gorilla. Videos of a female British English (BE) speaker were presented to BE children (aged 8-12) and adults (aged 20-60). Participants performed a cued shadowing task in which they repeated the last word of each English sentence presented in the videos. Target words were embedded in sentence-final position and manipulated visually, acoustically, and by cloze probability (high/low predictability of the target word; Kalikow, Stevens & Elliott, 1977, JASA 61(5):1137). To capture millisecond-accurate voice response times online, a sound signal was embedded at the beginning of each sentence and recorded together with participants’ vocal responses. Reaction times were then extracted automatically (combined with manual corrections) and analysed with mixed-effects models.

First results from 16 adults and 16 children (half of the planned sample) showed that listening to speech through face masks slowed down processing in both groups. However, the visual and acoustic mask effects were very small individually (10 ms) and only in combination showed a moderate effect size (40 ms). Visual, acoustic, and semantic cues all significantly reduced adverse mask effects, suggesting that listeners can compensate for acoustic changes by utilising both visual and semantic cues. Although children were less proficient in predictive processing overall, they were still able to use semantic cues to compensate for face mask effects in a similar fashion to adults. These findings provide novel information on the integration of multiple linguistic cues in adverse listening conditions across the lifespan and have practical implications for improving communication with face masks in educational settings such as classrooms.
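The beep-anchored timing method described above can be sketched in a few lines: locate the embedded sync signal in the recording by cross-correlation, then take the first sample after it whose amplitude exceeds a voicing threshold as the vocal onset. This is a simplified illustration of the general technique, not the authors’ actual pipeline; the function name, threshold, and minimum-gap parameter are hypothetical.

```python
import numpy as np

def extract_rt(recording, beep, sr, voice_threshold=0.1, min_gap=0.05):
    """Estimate vocal reaction time (seconds) from a mono recording that
    contains the embedded sync beep followed by the spoken response.

    Simplified sketch: the beep onset is found via cross-correlation, and
    the voice onset is the first sample after the beep (plus a small gap)
    whose absolute amplitude exceeds voice_threshold.
    """
    # Locate the sync beep: the lag with maximal cross-correlation
    corr = np.correlate(recording, beep, mode="valid")
    beep_onset = int(np.argmax(corr))

    # Naive amplitude-threshold voice-activity detection after the beep
    search_start = beep_onset + len(beep) + int(min_gap * sr)
    env = np.abs(recording[search_start:])
    voiced = np.nonzero(env > voice_threshold)[0]
    if voiced.size == 0:
        return None  # no response detected
    voice_onset = search_start + int(voiced[0])

    return (voice_onset - beep_onset) / sr

# Usage on a synthetic trial: 1 kHz beep at 0.2 s, "voice" 0.8 s later
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
beep = 0.5 * np.sin(2 * np.pi * 1000 * t)
recording = np.concatenate([
    np.zeros(int(0.2 * sr)),        # lead-in silence
    beep,                           # embedded sync signal
    np.zeros(int(0.7 * sr)),        # pause before response
    0.3 * np.ones(int(0.3 * sr)),   # stand-in for the vocal response
])
rt = extract_rt(recording, beep, sr)  # ~0.8 s
```

In practice (and as the abstract notes), automatic estimates like this would still need manual correction, since breath noise or background sounds can trip a simple amplitude threshold.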

Last modified 2022-01-24 16:11:02