Programme for the 2021 GEHM plenary meeting
The GEHM network on Gesture and Head Movements in Language is organising a plenary meeting, open to project participants and others interested, on 15-16 December 2021. The meeting will be held online on the Zoom platform, and a link will be sent to those who wish to participate.
Please contact your group representative in the GEHM network, or the network coordinator Patrizia Paggio, if you would like to attend.
Programme
15 December
13:00 - 13:15 | Getting ready and greetings
13:15 - 14:15 | First keynote by Judith Holler: Coordinating Minds with the Body
14:15 - 14:45 | Jens Edlund: Round-trip Latency and Interaction Chronograms
14:45 - 15:15 | Break
15:15 - 15:45 | Khurshid Ahmad and Carl Vogel: Spontaneous Head Movements and Micro-expression Detection
15:45 - 16:15 | Manex Aguirrezabal, Bart Jongejan, Costanza Navarretta and Patrizia Paggio: Detection of Head Movements in Video-recorded Dialogues: a Task for Machine Learning or Neural Models?
16:15 - 16:30 | Break
16:30 - 17:30 | Second keynote by Louis-Philippe Morency: Multimodal AI: Understanding Human Behaviours
17:30 | Closing day 1
16 December
09:00 - 09:30 | Gilbert Ambrazaitis, Johan Frid and David House: Auditory vs. Audiovisual Perception of Prominence
09:30 - 10:00 | Patrizia Paggio, Holger Mitterer and Alexandra Vella: Do Gestures Increase Prominence in Naturally Produced Utterances?
10:00 - 10:30 | Break
10:30 - 11:00 | Sandra Debreslioska and Marianne Gullberg: Information Status Affects Reference Tracking in Speech and Gesture
11:00 - 11:30 | Patrick Rohrer and Pilar Prieto Vives: Using M3D to Investigate the Multimodal Marking of Information Status
11:30 - 12:00 | Break
12:00 - 12:30 | Clarissa de Vries: Exploring the Multimodal Manifestation of Ironic Stance in Interaction
12:30 - 13:00 | Final discussion, administration and planning
13:00 | Closing day 2
Judith Holler (first keynote)
Abstract
Traditionally, visual bodily movements have been associated with the communication of affect and emotion. In the past decades, however, studies have convincingly demonstrated that some of these movements carry semantic information and contribute to the communication of propositional information. In this talk, I will shed light on the pragmatic contribution that visual bodily movements make in conversation. In doing so, I will focus on fundamental processes that are key to achieving mutual understanding in talk: signalling communicative intent, producing recipient-designed messages, signalling understanding and trouble in understanding, repairing problems in understanding, and communicating social actions (or “speech acts”). The bodily semiotic resources that speakers use in these pragmatic processes include a wide range of articulators, but in my talk I will focus on representational gestures (iconics and points) and facial signals. Together, the results demonstrate that when we engage in conversational interaction, our bodies act as core coordination devices.
Biography
Judith Holler is Associate Professor at the Donders Institute for Brain, Cognition and Behaviour, Radboud University, and an affiliated researcher at the Max Planck Institute for Psycholinguistics. Her research programme investigates human language in the very environment in which it has evolved, is acquired, and is used most: face-to-face interaction, with a focus on the semantics and pragmatics of human communication from a multimodal perspective. She was recently awarded a European Research Council Consolidator Grant funding the current CoAct (Communication in Action) project, which investigates the multimodal architecture of speech acts and their cognitive processing. Her research group, Communication in Social Interaction, is based at the Max Planck Institute as well as the Donders Institute for Brain, Cognition and Behaviour. Together with Asli Ozyurek, she also coordinates the Nijmegen Gesture Centre.
Louis-Philippe Morency (second keynote)
Abstract
Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices still lack many of these human-like abilities to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize and predict subtle human communicative behaviors in social context. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements in multimodal machine learning, addressing five core challenges: representation, alignment, fusion, translation and co-learning.
Biography
Louis-Philippe Morency is Associate Professor in the Language Technology Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly research faculty in the Computer Sciences Department of the University of Southern California and received his Ph.D. degree from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations to enable computers with the abilities to analyze, recognize and predict subtle human communicative behaviors during social interactions. He has received diverse awards, including AI's 10 to Watch by IEEE Intelligent Systems, the NetExplo Award in partnership with UNESCO, and best paper awards at ACM conferences. His work has been covered by media outlets such as The Wall Street Journal, The Economist and NPR.