Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk. / Jongejan, Bart; Paggio, Patrizia; Navarretta, Costanza.
In: Linköping Electronic Conference Proceedings, No. 141, 003, 2017, p. 10-17.
RIS
TY - GEN
T1 - Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk
AU - Jongejan, Bart
AU - Paggio, Patrizia
AU - Navarretta, Costanza
N1 - Conference code: 4th, 7th
PY - 2017
Y1 - 2017
AB - This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource-consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data; developing methods for automatic annotation is therefore crucial. We present an approach in which an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and jerk (the third derivative of position with respect to time). The trained classifier then adds head-movement annotations to new video data. Evaluated against manual annotations of the same data, the automatic annotation reaches an accuracy of 73.47%. The results also show that using jerk improves accuracy.
M3 - Conference article
SP - 10
EP - 17
JO - Linköping Electronic Conference Proceedings
JF - Linköping Electronic Conference Proceedings
SN - 1650-3740
IS - 141
M1 - 003
T2 - Nordic and European Symposium on Multimodal Communication
Y2 - 29 September 2016 through 30 September 2016
ER -
ID: 183642602
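The abstract describes classifying head movements from velocity, acceleration, and jerk, where jerk is the third derivative of position with respect to time. A minimal sketch of how such features could be derived from a tracked head-position series is shown below; the function name, the use of simple forward finite differences, and the fixed sampling interval are assumptions for illustration, not the authors' implementation (the paper feeds the features to an SVM classifier, which is omitted here):

```python
import numpy as np

def compute_kinematic_features(positions, dt):
    """Derive velocity, acceleration, and jerk from a 1-D series of
    head positions sampled at a fixed interval dt (seconds).

    Hypothetical helper: uses simple forward finite differences,
    not the feature extraction from the paper itself.
    """
    positions = np.asarray(positions, dtype=float)
    return {
        "velocity": np.diff(positions, n=1) / dt,         # 1st derivative
        "acceleration": np.diff(positions, n=2) / dt**2,  # 2nd derivative
        "jerk": np.diff(positions, n=3) / dt**3,          # 3rd derivative
    }

# Sanity check: for x(t) = t^3 the jerk is the constant 6.
t = np.arange(0.0, 1.0, 0.01)
features = compute_kinematic_features(t**3, 0.01)
```

Each feature array is shorter than the input by the order of the derivative, since each `np.diff` pass consumes one sample; in practice the features would be aligned to the annotated movement segments before training the classifier.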