P28 Sentential contextual facilitation of auditory word processing builds up during speech parsing
During the parsing of auditory speech, auditory input is processed more effectively near the end (vs. the beginning) of sentences. It is still unclear at which processing level these temporal dynamics in auditory processing originate. We investigated whether auditory word-processing dynamics during sentence parsing can be driven exclusively by predictions derived from sentential context. We presented listeners with auditory stimuli consisting of word sequences (arranged into either coherent sentences or unstructured, random word sequences) and a continuous distractor tone. We recorded reaction times (RTs) and frequency-tagged neuroelectric responses (auditory steady-state responses, ASSRs) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing at both the linguistic and the acoustic level, as evidenced by accelerated RTs and increased ASSRs across successive words within the sentences. These purely top-down, contextually driven auditory word-processing dynamics were modulated by the syntax of the speech and occurred only when listeners focused their attention on the speech. Moreover, they did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that dynamic linguistic and acoustic processing of auditory input during speech parsing can be driven exclusively by sentential predictions. These predictions may be shaped by the syntax of the speech, require the listener to actively parse the speech, and affect only the processing of the parsed speech, not that of concurrent yet perceptually separate auditory streams.
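As a minimal sketch of how frequency-tagged responses of this kind can be quantified, the example below computes the spectral amplitude of word-locked EEG epochs at a tagging frequency and compares it across word positions. The tagging frequency, sampling rate, channel count, and data layout are not reported in this abstract; all values and names below are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: quantifying ASSR amplitude at an assumed tagging
# frequency from word-locked EEG epochs. Sampling rate, tagging frequency,
# and epoch dimensions are illustrative assumptions only.
import numpy as np

def assr_amplitude(epochs, sfreq, tag_freq):
    """Spectral amplitude at tag_freq, averaged over trials and channels.

    epochs   : array, shape (n_trials, n_channels, n_samples)
    sfreq    : sampling rate in Hz (assumed)
    tag_freq : frequency-tagging rate in Hz (assumed)
    """
    n_samples = epochs.shape[-1]
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) / n_samples
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sfreq)
    bin_idx = np.argmin(np.abs(freqs - tag_freq))   # nearest FFT bin
    return spectrum[..., bin_idx].mean()            # mean over trials and channels

# Illustrative use: compare ASSR amplitude at early, middle, and late
# word positions within a sentence (random data stands in for EEG).
rng = np.random.default_rng(0)
sfreq, tag_freq = 500.0, 40.0                       # assumed values
positions = {p: rng.standard_normal((30, 32, 500))  # 30 trials, 32 channels, 1 s
             for p in ("early", "middle", "late")}
amps = {p: assr_amplitude(x, sfreq, tag_freq) for p, x in positions.items()}
print(amps)
```

In an analysis like the one described above, the resulting per-position amplitudes (and the corresponding RTs) would then be modeled with word-level covariates such as frequency and familiarity to isolate the sentential-context effect.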