13th Speech in Noise Workshop, 20-21 January 2022, Virtual Conference

P05 Effect of simple visual inputs on syllable parsing

Anirudh Kulkarni, Mikolaj Kegler
Imperial College London, United Kingdom

Tobias Reichenbach
Imperial College London, United Kingdom | Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany

(a) Presenting author

Speech comprehension, especially in difficult listening situations, is affected by visual signals such as those arising from a talker's face. From a neuroscientific perspective, this multisensory processing takes place as early as the primary auditory cortex. However, the neural mechanisms behind this audio-visual integration remain poorly understood. Here we use a computational model of a cortical microcircuit for speech processing to investigate how visual input can be incorporated into it. The model consists of cross-coupled excitatory and inhibitory neural populations that generate a theta rhythm, and this rhythm parses a speech input at its syllable onsets. To investigate the effect of visual input on syllable parsing, we add simple visual currents to the model that are proportional to one of the following: (1) the rate of syllables, (2) the mouth-opening area of the speaker, or (3) the velocity of the speaker's mouth area. We find that adding visual currents to the excitatory neuronal population affects speech comprehension, either improving or impairing it, depending on whether the currents are excitatory or inhibitory and on the audio-visual timing. In contrast, adding visual input currents to the inhibitory population does not affect speech comprehension. Our results therefore suggest neural mechanisms for audio-visual integration and make testable experimental predictions.
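The abstract does not spell out the model equations, but the mechanism it describes (coupled excitatory and inhibitory populations producing a theta rhythm, with a visual current injected into one population) can be illustrated with a generic Wilson-Cowan-style rate model. The sketch below is a minimal illustration under that assumption only: the parameter values, the 4 Hz placeholder mouth-opening signal, and the threshold-crossing readout are all hypothetical and are not taken from the authors' implementation.

```python
import numpy as np

# Wilson-Cowan-style excitatory-inhibitory pair (a generic stand-in,
# not the authors' microcircuit). Parameter values are illustrative
# and would need tuning to place the rhythm in the theta band.
tau_E, tau_I = 0.010, 0.020   # population time constants (s)
w_EE, w_EI = 16.0, 12.0       # E->E and I->E coupling
w_IE, w_II = 15.0, 3.0        # E->I and I->I coupling
I_aud = 1.25                  # constant stand-in for auditory drive

def S(x, a, theta):
    # Sigmoidal population response, shifted so that S(0) = 0.
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

dt, T = 1e-4, 2.0
t = np.arange(0.0, T, dt)

# Placeholder "visual" current: proportional to a hypothetical
# mouth-opening signal, modelled as a 4 Hz sinusoid (a typical
# syllable rate); np.gradient(mouth, dt) would give the velocity variant.
mouth = np.sin(2.0 * np.pi * 4.0 * t)
I_vis = 0.5 * mouth

E = np.zeros_like(t)
I = np.zeros_like(t)
for k in range(len(t) - 1):
    # The visual current is injected into the excitatory population only,
    # mirroring one of the manipulations described in the abstract.
    dE = (-E[k] + S(w_EE * E[k] - w_EI * I[k] + I_aud + I_vis[k], 1.3, 4.0)) / tau_E
    dI = (-I[k] + S(w_IE * E[k] - w_II * I[k], 2.0, 3.7)) / tau_I
    E[k + 1] = E[k] + dt * dE
    I[k + 1] = I[k] + dt * dI

# Rough frequency readout: count upward threshold crossings of E.
crossings = np.sum((E[:-1] < 0.2) & (E[1:] >= 0.2))
print(f"approximate rhythm frequency: {crossings / T:.1f} Hz")
```

In this toy setting, the contrast reported in the abstract corresponds to where the I_vis term is added: moving it from the excitatory equation (dE) to the inhibitory equation (dI) implements the inhibitory-population variant, and flipping its sign makes the injected current inhibitory rather than excitatory.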
