VR 360º subtitles

Designing a Test Suite with Eye-Tracking Technology

Authors

Marta Brescia-Zapata, Krzysztof Krejtz, Pilar Orero, Andrew T. Duchowski, Chris J. Hughes

DOI:

https://doi.org/10.47476/jat.v5i2.2022.184

Keywords:

subtitles, immersive environments, 360º videos, testing, eye-tracking

Abstract

Subtitle production is an increasingly creative accessibility service. New technologies allow subtitles to be placed at any location on the screen in a variety of formats, shapes, typographies, font sizes, and colours. The screen now affords accessible creativity, with subtitles able to provide novel experiences beyond those offered by traditional language translation. Immersive environments multiply 2D subtitle features into new creative viewing modalities. Testing subtitles in eXtended Reality (XR) has pushed existing methods to address user needs and enjoyment of audiovisual content in 360º viewing displays. After an overview of existing subtitle features in XR, the article describes the challenges of generating subtitle stimuli to test meaningful user viewing behaviours based on eye-tracking technology. The approach taken in the first experimental setup for implementing creative subtitles in XR with eye-tracking is presented, along with the novel research questions it addresses. The choices made regarding sound, duration, and storyboard are described. Conclusions show that testing subtitles in immersive media environments is both a linguistic and an artistic endeavour, one that requires an agile framework fostering the contrast and comparison of different functionalities. The results of the present preliminary study shed light on future experimental setups with eye-tracking.

Lay summary

Subtitling is an increasingly creative accessibility service. Subtitlers use new technologies to place subtitles at any location on the screen. They can also change the shape, typography, font size, and colour of subtitles. This personalisation opens the door to creative solutions and allows novel experiences beyond traditional 2D subtitles. An immersive environment is an illusory experience that surrounds you and transports you to another place. The challenges of testing subtitles in these new environments differ from those of testing in 2D media, as the user is no longer a passive spectator but an active part of the story. In this article, we give an overview of existing subtitle features in immersive environments. Then we describe the challenges of generating stimuli suitable for testing subtitles in immersive environments using eye-tracking technology. We explain the experimental setup used during our experiment as well as the research questions. We describe the choices we made when designing our stimuli, such as the sound, the duration, and the storyboard. We conclude that testing subtitles in immersive media environments presents linguistic and artistic challenges, and that we need a framework that allows different solutions to be contrasted and compared rapidly. The results of this preliminary study show how eye-tracking technology can be used to test subtitles in immersive environments.

Author Biographies

Marta Brescia-Zapata, Universitat Autònoma de Barcelona

Marta Brescia-Zapata is a PhD candidate in the Department of Translation, Interpreting and East Asian Studies at the Universitat Autònoma de Barcelona. She holds a BA in Translation and Interpreting from the Universidad de Granada and an MA in Audiovisual Translation from UAB. She is a member of the TransMedia Catalonia research group (2017SGR113), where she collaborates in two H2020 projects: TRACTION (Opera co-creation for a social transformation) and GreenScent (Smart Citizen Education for a greeN fuTure). She is currently working on subtitling for the deaf and hard of hearing in immersive media, thanks to a PhD scholarship granted by the Catalan government. She is the Spanish translator of Joel Snyder’s audio description manual “The Visual Made Verbal”, and also collaborates regularly as a subtitler and audio describer at the Festival INCLÚS.

Krzysztof Krejtz, SWPS University of Social Sciences and Humanities

Krzysztof Krejtz is a psychologist at SWPS University of Social Sciences and Humanities in Warsaw, Poland, where he leads the Eye Tracking Research Center. In 2017 he was a guest professor at Ulm University in Ulm, Germany. He has given invited talks at, among others, the Max Planck Institute (Germany), Bergen University (Norway), and the University of Nebraska–Lincoln (USA). He has extensive experience in social and cognitive psychology research methods and statistics. His research focuses on eye-tracking methodology and on developing second-order, eye-data-based metrics that capture the dynamics of attention and information processing (transition matrix entropy, the ambient-focal coefficient K), as well as on the dynamics of attention in the context of Human-Computer Interaction, multimedia learning, media user experience, and accessibility. Dr. Krejtz is a member of the Steering Committee of the ACM Symposium on Eye Tracking Research and Applications (ACM ETRA).

Pilar Orero, Universitat Autònoma de Barcelona

Professor Pilar Orero, PhD (UMIST, UK), works at the Universitat Autònoma de Barcelona (Spain) in the TransMedia Catalonia Lab. She has written and edited many books, nearly 100 academic papers, and almost the same number of book chapters, all on media accessibility. She has led and participated in numerous EU-funded research projects focusing on media accessibility. She works in standardisation, participating in the UN ITU IRG-AVA (Intersector Rapporteur Group on Audiovisual Media Accessibility), ISO, and ANEC. She has been working on immersive accessibility for the past four years, first in a project called ImAc, whose results are now being further developed in TRACTION, MEDIAVERSE, and MILE, and has just started to work on green accessibility in GREENSCENT. She leads the EU network LEADME on media accessibility.

Andrew T. Duchowski, Clemson University

Dr Andrew Duchowski is a professor of Visual Computing at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and his doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He joined the School of Computing faculty at Clemson in January 1998. He has since produced a corpus of publications and a textbook on eye-tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson’s eye-tracking laboratory and teaches a regular course on eye-tracking methodology that attracts students from a variety of disciplines across campus.

Chris J. Hughes, University of Salford

Dr Chris Hughes is a Lecturer in the School of Computer Science at the University of Salford, UK. His research focuses on developing computer science solutions that promote inclusivity and diversity throughout the broadcast industry, aiming to ensure that broadcast experiences are inclusive across different languages and address the needs of people with hearing or vision impairments, learning difficulties, and older audiences. He was a partner in the H2020 Immersive Accessibility (ImAc) project. Previously he worked in the UX group within BBC R&D, where he was responsible for developing the concept of responsive subtitles and demonstrated several methods for automatically recovering and phonetically realigning subtitles. He has a particular interest in accessible services and is currently focused on developing new methods for providing accessibility services within immersive contexts such as Virtual Reality and 360º video.

Published

2022-12-21

How to Cite

Brescia-Zapata, M., Krejtz, K., Orero, P., Duchowski, A., & Hughes, C. (2022). VR 360º subtitles: Designing a Test Suite with Eye-Tracking Technology. Journal of Audiovisual Translation, 5(2), 233–258. https://doi.org/10.47476/jat.v5i2.2022.184