Detecting head movements in video-recorded dyadic conversations

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed

Standard

Detecting head movements in video-recorded dyadic conversations. / Paggio, Patrizia; Jongejan, Bart; Agirrezabal, Manex; Navarretta, Costanza.

Proceedings of the International Conference on Multimodal Interaction: Adjunct. New York: Association for Computing Machinery, 2018. pp. 1-6.


Harvard

Paggio, P, Jongejan, B, Agirrezabal, M & Navarretta, C 2018, Detecting head movements in video-recorded dyadic conversations. in Proceedings of the International Conference on Multimodal Interaction: Adjunct. Association for Computing Machinery, New York, pp. 1-6. https://doi.org/10.1145/3281151.3281152

APA

Paggio, P., Jongejan, B., Agirrezabal, M., & Navarretta, C. (2018). Detecting head movements in video-recorded dyadic conversations. In Proceedings of the International Conference on Multimodal Interaction: Adjunct (pp. 1-6). New York: Association for Computing Machinery. https://doi.org/10.1145/3281151.3281152

Vancouver

Paggio P, Jongejan B, Agirrezabal M, Navarretta C. Detecting head movements in video-recorded dyadic conversations. In Proceedings of the International Conference on Multimodal Interaction: Adjunct. New York: Association for Computing Machinery. 2018. p. 1-6. https://doi.org/10.1145/3281151.3281152

Author

Paggio, Patrizia ; Jongejan, Bart ; Agirrezabal, Manex ; Navarretta, Costanza. / Detecting head movements in video-recorded dyadic conversations. Proceedings of the International Conference on Multimodal Interaction: Adjunct. New York: Association for Computing Machinery, 2018. pp. 1-6

Bibtex

@inproceedings{f810685cd29b4210895ccb8b4b4e2605,
title = "Detecting head movements in video-recorded dyadic conversations",
abstract = "This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach where recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best-performing classifier, a Multilayer Perceptron trained using all the features, obtains 0.75 accuracy and outperforms the mono-modal baseline classifier.",
author = "Patrizia Paggio and Bart Jongejan and Manex Agirrezabal and Costanza Navarretta",
year = "2018",
doi = "10.1145/3281151.3281152",
language = "English",
isbn = "978-1-4503-6002-9",
pages = "1--6",
booktitle = "Proceedings of the International Conference on Multimodal Interaction: Adjunct",
publisher = "Association for Computing Machinery",

}
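The record above does not include the paper's implementation. As a rough illustration of the visual features the abstract names (per-frame velocity, acceleration, and jerk), here is a minimal sketch that derives them from a 1-D head-position track via finite differences. The function name, the fixed frame rate, and the 1-D simplification are all assumptions made for illustration, not the authors' code.

```python
def kinematic_features(positions, fps=25.0):
    """Per-frame velocity, acceleration, and jerk from a 1-D position track.

    positions: head position per video frame (e.g. vertical pixel coordinate).
    fps: assumed constant frame rate of the recording (illustrative default).
    Each derivative shortens the sequence by one frame.
    """
    dt = 1.0 / fps
    # First, second, and third finite differences of position over time.
    vel = [(positions[i + 1] - positions[i]) / dt for i in range(len(positions) - 1)]
    acc = [(vel[i + 1] - vel[i]) / dt for i in range(len(vel) - 1)]
    jerk = [(acc[i + 1] - acc[i]) / dt for i in range(len(acc) - 1)]
    return vel, acc, jerk


# Example: a short synthetic track at 1 frame/second for easy inspection.
vel, acc, jerk = kinematic_features([0.0, 1.0, 3.0, 6.0], fps=1.0)
```

In the paper's setting these values (together with pitch and intensity from the co-occurring speech) would form the per-frame feature vector fed to the classifiers.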

RIS

TY - GEN

T1 - Detecting head movements in video-recorded dyadic conversations

AU - Paggio, Patrizia

AU - Jongejan, Bart

AU - Agirrezabal, Manex

AU - Navarretta, Costanza

PY - 2018

Y1 - 2018

N2 - This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach where recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best-performing classifier, a Multilayer Perceptron trained using all the features, obtains 0.75 accuracy and outperforms the mono-modal baseline classifier.

AB - This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach where recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best-performing classifier, a Multilayer Perceptron trained using all the features, obtains 0.75 accuracy and outperforms the mono-modal baseline classifier.

UR - https://dl.acm.org/citation.cfm?doid=3281151.3281152

U2 - 10.1145/3281151.3281152

DO - 10.1145/3281151.3281152

M3 - Article in proceedings

SN - 978-1-4503-6002-9

SP - 1

EP - 6

BT - Proceedings of the International Conference on Multimodal Interaction: Adjunct

PB - Association for Computing Machinery

CY - New York

ER -
