Update: The related paper has been published in the Journal of Virtual Reality and Broadcasting (link).
We are developing a new method that enables humanoid robots to react to a human interaction partner in the same manner as demonstrated in prior interactions between two humans. In the training phase, motion capture data of two human interaction partners are used to build a sequence of so-called interaction meshes. This sequence captures the spatial relationships between the interaction partners' extremities (hands, feet, and head) in task space over the course of the demonstrated interaction. During the live human-robot interaction, a second interaction mesh is created. Given the human's and the robot's body sizes as well as the human's current posture, this interaction mesh is deformed so as to maximize its similarity with the training-phase interaction mesh of the corresponding frame. From the deformed interaction mesh, the robot's posture can be easily generated. The problem of maximizing the similarity of interaction meshes is modeled as the problem of minimizing the deformation energy between the two meshes.
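The deformation-energy minimization described above can be sketched as a Laplacian least-squares problem, a common formulation for interaction-mesh deformation. The following is a minimal illustration, not the published method: the uniform edge weights, the edge list, the vertex indexing, and the soft-constraint weight `w` are all assumptions made for the example.

```python
import numpy as np

def laplacian(n_vertices, edges):
    """Uniform-weight graph Laplacian L = D - A of the interaction mesh."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def deform_mesh(train_pos, live_human_pos, human_idx, edges, w=10.0):
    """Deform the training mesh to match the live human posture.

    Minimizes ||L x - L x_train||^2 + w^2 ||x[human_idx] - live||^2
    in the least-squares sense, per coordinate. The first term preserves
    the spatial relationships (Laplacian coordinates) of the training
    mesh; the second softly pins the human's extremities to their live
    positions. Rows of the result not in human_idx give the deformed
    positions for the robot's extremities.
    """
    n = train_pos.shape[0]
    L = laplacian(n, edges)
    delta = L @ train_pos                 # differential coords of training mesh
    C = np.zeros((len(human_idx), n))     # selector for constrained vertices
    for r, i in enumerate(human_idx):
        C[r, i] = 1.0
    A = np.vstack([L, w * C])
    b = np.vstack([delta, w * live_human_pos])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Because the Laplacian term only fixes the mesh shape up to translation, the soft position constraints are what anchor the deformed mesh in space; if the live human simply shifts rigidly, the robot-side vertices shift with them.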
Due to their differing skeleton structures, human motion cannot be transferred directly to the humanoid robot. We use an inverse kinematics solver to apply human postures from the interaction mesh to the robot. The positions of the robot's hands and feet (marked yellow) are constrained to the positions of the human's hands and feet.
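The idea of position-constrained inverse kinematics can be illustrated with a generic damped least-squares solver on a planar 2-link arm. This is only a toy sketch, not the solver used for the robot: the link lengths, damping factor, and target are hypothetical, and the real robot has far more degrees of freedom.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # hypothetical link lengths of a planar 2-link arm

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of the end-effector position w.r.t. q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, q0, iters=200, damping=1e-2):
    """Damped least-squares IK: iterate dq = J^T (J J^T + lam^2 I)^-1 e,
    driving the end-effector toward the target position."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return q
```

The damping term keeps the update well-behaved near kinematic singularities (e.g. a fully stretched arm), which a plain pseudoinverse would not.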
An interaction mesh is used to control a humanoid robot so that it reacts to ongoing user motions. Punches at heights not demonstrated in the initial recording can still be countered with a suitable defensive motion.