Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk

Publication: Contribution to journal › Conference article › peer-reviewed

Standard

Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. / Jongejan, Bart; Paggio, Patrizia; Navarretta, Costanza.

In: Linköping Electronic Conference Proceedings, No. 141, 003, 2017, pp. 10-17.


Harvard

Jongejan, B, Paggio, P & Navarretta, C 2017, 'Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk', Linköping Electronic Conference Proceedings, no. 141, 003, pp. 10-17. <http://www.ep.liu.se/ecp/article.asp?issue=141&article=003&volume=>

APA

Jongejan, B., Paggio, P., & Navarretta, C. (2017). Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. Linköping Electronic Conference Proceedings, (141), 10-17. [003]. http://www.ep.liu.se/ecp/article.asp?issue=141&article=003&volume=

Vancouver

Jongejan B, Paggio P, Navarretta C. Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. Linköping Electronic Conference Proceedings. 2017;(141):10-17. 003.

Author

Jongejan, Bart ; Paggio, Patrizia ; Navarretta, Costanza. / Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. In: Linköping Electronic Conference Proceedings. 2017 ; No. 141. pp. 10-17.

Bibtex

@inproceedings{95b4568c4af54849b787a694047842f3,
title = "Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk",
abstract = "This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data. Therefore, developing methods for automatic annotation is crucial. We present an approach where an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and the third derivative of position with respect to time, jerk. Consequently, annotations of head movements are added to new video data. The results of the automatic annotation are evaluated against manual annotations in the same data and show an accuracy of 73.47% with respect to these. The results also show that using jerk improves accuracy.",
author = "Bart Jongejan and Patrizia Paggio and Costanza Navarretta",
year = "2017",
language = "English",
pages = "10--17",
journal = "Link{\"o}ping Electronic Conference Proceedings",
issn = "1650-3740",
number = "141",
note = "Nordic and European Symposium on Multimodal Communication: 7th Nordic and 4th European Symposium on Multimodal Communication, MMSYM2017; Conference date: 29-09-2016 through 30-09-2016",
url = "http://mmsym.org/?page_id=412",

}
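The abstract describes classifying head movements from velocity, acceleration, and jerk, where jerk is the third time derivative of position. As an illustrative sketch only, these features could be estimated from a tracked head-position sequence by repeated finite differencing; the function name, frame rate, and forward-difference scheme here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: estimate velocity, acceleration, and jerk from a
# sequence of head positions sampled at a fixed frame rate. Assumes
# uniform frame spacing; the paper's actual pipeline may differ.

def finite_differences(positions, fps=25.0):
    """Return velocity, acceleration, and jerk estimates via forward
    differences. Each successive derivative is one sample shorter."""
    dt = 1.0 / fps
    velocity = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acceleration = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    jerk = [(b - a) / dt for a, b in zip(acceleration, acceleration[1:])]
    return velocity, acceleration, jerk

# Example: a head that accelerates steadily, then stops abruptly.
x = [0.0, 0.0, 1.0, 3.0, 6.0, 6.0]
v, a, j = finite_differences(x, fps=1.0)
print(v)  # [0.0, 1.0, 2.0, 3.0, 0.0]
print(a)  # [1.0, 1.0, 1.0, -3.0]
print(j)  # [0.0, 0.0, -4.0]
```

The large negative jerk at the abrupt stop illustrates why the abstract reports that jerk adds discriminative information beyond velocity and acceleration: sudden onsets and offsets of movement show up as jerk spikes. In practice such per-frame features would be fed to an SVM classifier (e.g. over per-axis position tracks from a face tracker).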

RIS

TY - GEN

T1 - Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk

AU - Jongejan, Bart

AU - Paggio, Patrizia

AU - Navarretta, Costanza

N1 - Conference code: 4th, 7th

PY - 2017

Y1 - 2017

N2 - This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data. Therefore, developing methods for automatic annotation is crucial. We present an approach where an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and the third derivative of position with respect to time, jerk. Consequently, annotations of head movements are added to new video data. The results of the automatic annotation are evaluated against manual annotations in the same data and show an accuracy of 73.47% with respect to these. The results also show that using jerk improves accuracy.

AB - This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data. Therefore, developing methods for automatic annotation is crucial. We present an approach where an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and the third derivative of position with respect to time, jerk. Consequently, annotations of head movements are added to new video data. The results of the automatic annotation are evaluated against manual annotations in the same data and show an accuracy of 73.47% with respect to these. The results also show that using jerk improves accuracy.

M3 - Conference article

SP - 10

EP - 17

JO - Linköping Electronic Conference Proceedings

JF - Linköping Electronic Conference Proceedings

SN - 1650-3740

IS - 141

M1 - 003

T2 - Nordic and European Symposium on Multimodal Communication

Y2 - 29 September 2016 through 30 September 2016

ER -
