Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. / Navarretta, Costanza.
Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications. IEEE Signal Processing Society, 2016. p. 233-237.
RIS
TY - GEN
T1 - Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech
AU - Navarretta, Costanza
PY - 2016
Y1 - 2016
N2 - Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, and this is relevant to the development of socio-cognitive ICT.
M3 - Article in proceedings
SN - 978-1-5090-2644-9
SP - 233
EP - 237
BT - Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications
PB - IEEE Signal Processing Society
ER -
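The abstract describes a Naive Bayes classifier trained on the duration and shape features of one participant's gestures to predict the presence and shape of the interlocutor's gestures. As an illustrative sketch only (not the paper's implementation, and with entirely hypothetical feature values and toy data), a minimal categorical Naive Bayes with Laplace smoothing over such features might look like:

```python
from collections import Counter, defaultdict

# Hypothetical training rows: (speaker's gesture features, listener's response).
# Features are an assumed duration bin and shape label for the speaker's head
# movement; the label is whether the interlocutor produced a head movement too.
train = [
    (("short", "Nod"), "move"),
    (("short", "Nod"), "move"),
    (("long", "Shake"), "move"),
    (("short", "Tilt"), "none"),
    (("long", "Tilt"), "none"),
    (("long", "Nod"), "move"),
]

def fit(rows):
    """Estimate class priors and per-feature value counts from training rows."""
    priors = Counter(label for _, label in rows)
    counts = defaultdict(Counter)  # (feature index, label) -> value counts
    values = defaultdict(set)      # feature index -> set of observed values
    for feats, label in rows:
        for i, v in enumerate(feats):
            counts[(i, label)][v] += 1
            values[i].add(v)
    return priors, counts, values, len(rows)

def predict(model, feats):
    """Return the class maximizing prior * product of smoothed likelihoods."""
    priors, counts, values, n = model
    best, best_p = None, 0.0
    for label, prior in priors.items():
        p = prior / n
        for i, v in enumerate(feats):
            c = counts[(i, label)]
            # Add-one (Laplace) smoothing over the feature's value set
            p *= (c[v] + 1) / (sum(c.values()) + len(values[i]))
        if p > best_p:
            best, best_p = label, p
    return best

model = fit(train)
print(predict(model, ("short", "Nod")))  # -> move (nods were mostly mirrored)
print(predict(model, ("long", "Tilt")))  # -> none
```

The paper also reports that related speech segments help only for facial expressions; in this sketch, speech could be added as a further categorical feature in each row.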