Towards a Methodology Supporting Semiautomatic Annotation of Head Movements in Video-recorded Conversations
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
We present a method to support the annotation of head movements in video-recorded conversations. Head movement segments from annotated multimodal data are used to train a model to detect head movements in unseen data. The resulting predicted movement sequences are uploaded to the ANVIL tool for post-annotation editing. The automatically identified head movements and the original annotations are then compared to assess the overlap between the two. This analysis showed that movement onsets were detected more easily than offsets, and pointed to a number of patterns in the mismatches between original annotations and model predictions that could be addressed in general terms in post-annotation guidelines.
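The comparison step described above — assessing the overlap between predicted movement segments and the original annotations — can be illustrated with a minimal sketch. This is not the authors' evaluation code; the segment representation (`(start, end)` times in seconds), the function names, and the `min_overlap` threshold are all assumptions made for illustration.

```python
def overlap(a, b):
    """Length (in seconds) of the temporal intersection of two (start, end) segments."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def match_segments(gold, predicted, min_overlap=0.1):
    """Pair each gold segment with the predicted segment it overlaps most,
    requiring at least `min_overlap` seconds of shared time (threshold is
    an illustrative assumption, not from the paper)."""
    matches = []
    for g in gold:
        best = max(predicted, key=lambda p: overlap(g, p), default=None)
        if best is not None and overlap(g, best) >= min_overlap:
            matches.append((g, best))
    return matches

# Example: the first gold segment is matched; the second has no overlapping prediction.
gold = [(0.5, 1.2), (3.0, 4.1)]
pred = [(0.6, 1.0), (5.0, 5.5)]
print(match_segments(gold, pred))  # → [((0.5, 1.2), (0.6, 1.0))]
```

Comparing onset and offset distances within such matched pairs is one simple way to observe the asymmetry the paper reports, namely that onsets align more closely than offsets.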
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop |
| Publisher | Association for Computational Linguistics |
| Publication date | 2021 |
| Pages | 151-159 |
| Publication status | Published - 2021 |
Links
- [Final published version](https://aclanthology.org/2021.law-1.16.pdf)