Language and eye-tracking seminar at NorS

Eye-tracking

The Linguistics Lab at the Department of Nordic Studies and Linguistics (NorS) is now equipped with a new EyeLink 1000 eye-tracker! To celebrate the purchase of this highly accurate and precise eye-tracker, we invite you to attend our seminar on language and eye-tracking. The goal of the event is to promote and inspire eye-tracking research at HUM. Local and international speakers will present their eye-tracking projects, which span a wide range of basic and applied research fields, including sentence processing, child language, machine learning, second language studies and applied hearing research.

Invited speakers: Lena Jäger (University of Zurich), Marcus Nyström (Lund University), Katrine Falcon Søby (University of Copenhagen), Dorothea Wendt (Eriksholm Research Centre), Fabio Trecca (Aarhus University)

All researchers and students are welcome to attend. Participation is free of charge for registered participants. Please register before 24 January 2022.

Currently, the event is planned to take place physically at South Campus.

All talks will be in English.

 

13.00

Nora Hollenstein (UCPH)

Introduction: Eye-tracking research in natural language processing and at NorS.

13.30

Lena Jäger (University of Zurich)

Eye tracking-based reader identification.

14.15

Marcus Nyström (Lund University)

Event classification in eye-tracking data – from hand coding to deep learning.

15.00

Coffee break

15.30

Katrine Falcon Søby (UCPH)

Tracking native speakers’ processing of anomalous learner syntax.

16.15

Dorothea Wendt (Eriksholm Research Centre)

Assessing pupil dilation and eye-movements to investigate listening effort.

17.00

Fabio Trecca (Aarhus University)

Using eye tracking to analyze word segmentation in fluent speech in Danish children under the age of three years.

17.45

End of programme.

 

 

13.00-13.30: Eye-tracking research in natural language processing and at NorS

Nora Hollenstein, Center for Language Technology, University of Copenhagen, Denmark

First, I will give an overview of current applications of eye-tracking in natural language processing and show how eye movement data can be used to improve and evaluate computational language models. I will then present our own eye-tracking research at NorS in the context of two projects: the creation of a new eye-tracking corpus from natural reading, and the use of existing eye-tracking data to improve our understanding of computational models.

13.30-14.15: Machine learning methods for eye tracking data: Cognitive models and deep neural network architectures

Lena Jäger, Department of Computational Linguistics, University of Zurich, Switzerland

The way we move our eyes is highly informative about the (often unconscious) processes that unfold in our minds. In this talk, I will present methods for the analysis of eye tracking data that allow us to make inferences or predictions about the viewer or the stimulus. As an exemplary problem setting, I will focus on reader/viewer identification. First, I will demonstrate how we can make use of psycholinguistic domain knowledge encoded in a generative cognitive model to resolve a discriminative problem setting (here: viewer identification) by deriving a Fisher kernel from the cognitive model. Second, I will present a deep neural network architecture that extracts informative embeddings directly from the raw (non-preprocessed) eye tracking signal. For the task of user identification, the proposed DeepEyedentification network outperforms previous approaches by one order of magnitude in terms of identification error rate and two orders of magnitude in terms of time needed for identification. 

14.15-15.00: Event classification in eye-tracking data – from hand coding to deep learning

Marcus Nyström, Lund University Humanities Lab, Lund University, Sweden

A critical part of many analyses of eye-tracking data includes dividing the raw data into periods of events such as fixations and saccades. During this presentation, I will talk about how to approach the problem of event classification from a theoretical and practical perspective, departing from some of my recent work on this topic. The goal is to provide the listeners with sufficient knowledge and practical tools to successfully use event classification in their own research.
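The classification problem described above is often introduced via the classic velocity-threshold (I-VT) algorithm: samples whose point-to-point gaze velocity stays below a threshold are labelled fixations, faster samples saccades. The sketch below is illustrative only and is not taken from the talk; the function name, signature, and the 30 deg/s default threshold are assumptions for the example.

```python
import numpy as np

def ivt_classify(x, y, timestamps, velocity_threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' with a simple
    velocity-threshold (I-VT) rule.

    x, y               -- gaze position in degrees of visual angle
    timestamps         -- sample times in seconds
    velocity_threshold -- deg/s; 30 deg/s is a commonly used default
                          (an assumption here, not a universal standard)
    """
    x, y, t = map(np.asarray, (x, y, timestamps))
    # Point-to-point angular velocity between consecutive samples.
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    # Pad so every sample gets a label (first sample reuses the first velocity).
    v = np.concatenate(([v[0]], v))
    return np.where(v < velocity_threshold, "fixation", "saccade")

# Example: six samples at 1000 Hz, still gaze followed by a rapid shift.
t = np.arange(6) / 1000
x = np.array([0.0, 0.0, 0.0, 0.1, 0.2, 0.2])
y = np.zeros(6)
print(ivt_classify(x, y, t))
```

Real pipelines (and the hand-coding-to-deep-learning progression the talk covers) go far beyond this: noise filtering, minimum event durations, and handling of smooth pursuit and blinks all complicate the picture, which is precisely why learned classifiers have become attractive.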

15.30-16.15: Tracking native speakers’ processing of anomalous learner syntax

Katrine Falcon Søby, University of Copenhagen, Denmark

What are the consequences of using anomalous learner syntax in written production aimed at native speakers? Second language learners of Norwegian and other verb-second (V2) languages frequently place the verb in third position (e.g. *Adverbial-Subject-Verb), although it is mandatorily in second position (Adverbial-Verb-Subject). In our eye-tracking study, native Norwegian speakers read sentences with either grammatical verb-second or ungrammatical verb-third (V3) word order. Unlike previous eye-tracking studies of ungrammaticality, which have primarily addressed morphosyntax, we exclusively manipulate word order, with no morphological or semantic changes. We find that native speakers react immediately to ungrammatical V3 word order: they regress more from the subject and verb and display increased fixation durations, in both early and late measures. Participants recover quickly, already on the following word. We also find habituation effects, especially for the ungrammatical sentences, suggesting that V3 word order is less disruptive after multiple exposures. The effects of grammaticality are unaffected by the length of the initial adverbial. As ungrammatical V3 word order is highly noticeable and slows down processing, we argue that it is relevant for teachers and learners of V2 languages to focus on the acquisition of word order.

16.15-17.00: Assessing pupil dilation and eye-movements to investigate listening effort

Dorothea Wendt, Eriksholm Research Centre, Snekkersten, Denmark; Department of Health Technology, Technical University of Denmark, Denmark

Communication and speech perception in everyday life have been reported to be effortful for people who are hard of hearing. Consequences of increased listening effort include, for example, higher levels of mental distress and fatigue leading to stress, a greater need for recovery after work, or an increased incidence of stress-related sick leave (Gatehouse and Gordon, 1990; Kramer et al., 2006; Edwards, 2007; Hornsby, 2013). Hence, there is growing interest in the field of audiology in identifying factors that cause such difficulties during speech perception. More specifically, the assessment of listening effort has attracted increasing interest, since it has been shown to provide an additional dimension for evaluating speech perception in people with hearing loss. In this talk, I will present two methods based on oculometric measures, namely eye-movement position and pupil size, to study listening effort in people with hearing impairment and to examine the impact of factors such as interfering background noise, linguistic complexity, and motivation.

17.00-17.45: Using eye tracking to analyze word segmentation in fluent speech in Danish children under the age of three years

Fabio Trecca, Aarhus University, Denmark

Understanding spoken language requires, among other skills, the ability to segment continuous speech into its constituent words. Children master this skill very early in life. However, phonetic properties of the speech input can make some sentences harder to segment than others. In this talk, I will present data from two eye-tracking studies with 2- to 3-year-old Danish children showing that the presence of vowels at word boundaries impedes both the recognition of familiar words and the acquisition of novel words. By capitalizing on the high temporal resolution of the eye-tracking-based Looking-While-Listening paradigm, we were able to show that: (1) vowel-initial words (e.g., Find aben!) can take up to a second longer to identify in fluent speech than consonant-initial words (e.g., Find bamsen!); and (2) consonant-initial nonsense words presented in vocoid-final carrier phrases (e.g., Her er syffen!) are retained less robustly after exposure than words presented in consonant-final carrier phrases (e.g., Find syffen!).