Multimodal Detection and Classification of Head Movements in Face-to-Face Conversations: Exploring Models, Features and Their Interaction

Publication: Book/anthology/report contribution › Conference contribution in proceedings › Research › Peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 521 KB, PDF document

In this work we perform multimodal detection and classification
of head movements from face-to-face video conversation data.
We have experimented with different models and feature sets
and provide insight into the effect of individual features,
as well as how their interaction can enhance a head movement
classifier. The features used include nose, neck and mid-hip position
coordinates and their derivatives, together with acoustic features,
namely the intensity and pitch of the speaker in focus. Results
show that when input features are sufficiently processed by
interacting with each other, a linear classifier can reach
performance similar to that of a more complex non-linear neural
model with several hidden layers. Our best models achieve
state-of-the-art performance on the detection task, measured by
macro-averaged F1 score.
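The two ideas the abstract leans on, letting input features interact before a linear classifier sees them, and evaluating with macro-averaged F1, can be sketched as follows. This is a minimal illustration only: the function names are hypothetical, and the paper's actual feature pipeline, models, and label set are not reproduced here.

```python
from itertools import combinations

def interaction_expand(x):
    """Append all pairwise products to a feature vector, so that a
    linear classifier can exploit feature interactions (a simple stand-in
    for richer interaction processing)."""
    return list(x) + [a * b for a, b in combinations(x, 2)]

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so that rare head-movement classes count as much as frequent ones."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Example: a 3-dimensional input gains 3 pairwise interaction terms.
expanded = interaction_expand([1.0, 2.0, 3.0])
```

Macro-averaging is the natural choice here because head-movement frames are typically much rarer than no-movement frames, and a plain accuracy or micro-F1 score would be dominated by the majority class.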
Original language: English
Title: Gesture and Speech in Interaction (GESPIN 2023)
Place of publication: Nijmegen
Publisher: Max Planck Institute for Psycholinguistics
Publication date: 2023
Status: Published - 2023

ID: 374969032