Sixin Liao, Macquarie University (Australia)
Jan-Louis Kruger, Macquarie University (Australia)
Erik Reichle, Macquarie University (Australia)
Lili Yu, Macquarie University (Australia)

Subtitle reading differs from the normal reading of static texts in many ways. Apart from having no control over the pace at which the text is presented, viewers must read subtitles while processing information from other channels: the image and on-screen dynamic text (i.e., the subtitles) in the visual channel, and spoken dialogue in the auditory channel. Because these sources often convey similar or identical content, they are redundant with one another to varying degrees. These redundancies between modes could significantly affect how viewers prioritize, organize, and integrate information. Subtitle processing could also be affected by whether or not the subtitles are in the same language as the dialogue, and by how well the viewer understands the language of the subtitles or the dialogue. For instance, processing first-language subtitles may differ when the dialogue is also in the first language from when the dialogue is in the second language.

This study therefore sets out to investigate how different degrees of redundancy generated by information in different modalities (audio vs. visual) and languages (native language/L1 vs. second language/L2) affect subtitle reading patterns. The study addresses the following research questions:

R1: What is the impact of the presence or absence of L1 audio on the processing of L1 or L2 subtitles? 

R2: What is the impact of the presence or absence of L2 audio on the processing of L1 or L2 subtitles?

To address these questions, thirty native speakers of Chinese who use English as their second language were recruited for an eye-tracking experiment. The experiment used a 2 (Subtitle language: Chinese, English) × 3 (Audio type: Chinese, English, No Audio) within-subjects design, with the No Audio conditions serving as the baseline. In other words, each participant watched six videos while their eye movements were recorded: Chinese subtitles only, English subtitles only, Chinese subtitles with Chinese audio, Chinese subtitles with English audio, English subtitles with Chinese audio, and English subtitles with English audio. After watching each video, participants completed a multiple-choice comprehension test. Linear mixed models (LMMs) were used to analyse the data. We examined global sentence-level and local word-based measures (e.g., reading times, fixation durations, forward saccade length, number of fixations, and regressions), as well as the word-frequency effect, to assess the impact of language and mode on subtitle processing.
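As a hedged illustration of the kind of LMM analysis described above, the sketch below fits a linear mixed model to simulated word-level reading times, with subtitle language, audio type, and log word frequency as fixed effects and a by-participant random intercept (using Python's statsmodels). All variable names, effect sizes, and data here are hypothetical, not the study's actual measures or model specification.

```python
# Hypothetical sketch of an LMM analysis of subtitle reading times.
# All data are simulated; the names (rt, subtitle_lang, audio_type,
# log_freq, participant) are illustrative, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_participants, n_words = 30, 40

rows = []
for p in range(n_participants):
    p_intercept = rng.normal(0, 20)          # random intercept per participant
    for subtitle in ("L1", "L2"):
        for audio in ("L1", "L2", "none"):
            log_freq = rng.normal(3, 1, n_words)
            rt = (250 + p_intercept
                  + 30 * (subtitle == "L2")   # assumed slower L2 subtitle reading
                  - 15 * log_freq             # assumed word-frequency effect
                  + rng.normal(0, 25, n_words))
            rows += [dict(participant=p, subtitle_lang=subtitle,
                          audio_type=audio, log_freq=f, rt=r)
                     for f, r in zip(log_freq, rt)]

data = pd.DataFrame(rows)

# Fixed effects: subtitle language x audio type interaction plus frequency;
# groups=... specifies the by-participant random intercept.
model = smf.mixedlm("rt ~ C(subtitle_lang) * C(audio_type) + log_freq",
                    data, groups=data["participant"])
result = model.fit()
print(result.fe_params.round(1))
```

With this specification the model estimates seven fixed-effect coefficients (intercept, one subtitle-language contrast, two audio-type contrasts, two interaction terms, and the frequency slope); a negative frequency slope would correspond to the word-frequency effect mentioned above.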



Keywords: subtitle processing, cognitive load, multiple redundancies, multimedia learning