The Multimodal Human-Human-Robot-Interactions (MHHRI) dataset was collected to study personality simultaneously in human-human interactions (HHI) and human-robot interactions (HRI), and its relationship with engagement. Multimodal data were collected during a controlled interaction study comprising dyadic interactions between two human participants and triadic interactions between two human participants and a robot, in which the interactants asked each other a set of personal questions. Interactions were recorded using two static cameras, two dynamic cameras, and two biosensors. Metadata were collected by having participants fill in two types of questionnaires: one in which they assessed their own personality traits and their perceived engagement with their partners (self labels), and one in which they assessed the personality traits of the other participants in the study (acquaintance labels).
The authors segmented each recording into short clips, each spanning one question-and-answer window. In the HHI task, each clip contains one participant asking a question to their interaction partner. Similarly, in the HRI task, each clip comprises the robot asking a question to one of the participants and the target participant responding accordingly. This yielded 290 HHI clips, 456 HRI clips, and 746 clips in total for each data modality. However, the Q sensor did not work during one of the sessions, leaving 276 physiological clips for HHI. Each clip lasts between 20 and 120 seconds, resulting in a total of 4 hours and 15 minutes of fully synchronised multimodal recordings.
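Below is a minimal sketch of how the clip-level structure described above could be indexed in code. The clip counts are taken from the description; the directory layout, file naming scheme, and modality labels are hypothetical assumptions for illustration only, not the official MHHRI distribution format.

```python
# Sketch: indexing MHHRI-style clips by task and modality.
# Counts are from the dataset description; the layout
# <root>/<task>/<modality>/<session>_<clip_id>.<ext> is a hypothetical assumption.
from collections import defaultdict
from dataclasses import dataclass
from pathlib import Path

# Reported clip counts (physiological HHI is lower because the Q sensor
# failed during one session).
REPORTED_CLIPS = {"HHI": 290, "HRI": 456}
REPORTED_PHYSIO_HHI_CLIPS = 276

# Sanity check against the total quoted in the description.
assert sum(REPORTED_CLIPS.values()) == 746


@dataclass
class Clip:
    task: str       # "HHI" or "HRI"
    session: str    # e.g. "S01" (hypothetical identifier)
    modality: str   # e.g. "video", "audio", "physio" (hypothetical labels)
    path: Path


def index_clips(root: Path) -> dict[tuple[str, str], list[Clip]]:
    """Group clip files by (task, modality), assuming the layout above."""
    index: dict[tuple[str, str], list[Clip]] = defaultdict(list)
    for path in root.glob("*/*/*"):
        if not path.is_file():
            continue
        task, modality = path.parts[-3], path.parts[-2]
        session = path.stem.split("_")[0]
        index[(task, modality)].append(Clip(task, session, modality, path))
    return index


# Usage (hypothetical root directory):
# clips = index_clips(Path("mhhri_clips"))
# print(len(clips[("HHI", "physio")]))  # expected 276 if the layout matches
```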
@article{8003432,
author = {Celiktutan, Oya and Skordos, Efstratios and Gunes, Hatice},
doi = {10.1109/TAFFC.2017.2737019},
journal = {IEEE Transactions on Affective Computing},
number = {4},
pages = {484--497},
title = {Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement},
volume = {10},
year = {2019}
}