Inverse Kinematics Solver and Motion Retargeting with Blender

    Posted on June 14, 2012 by vogt3 in Software.

    Motion capture systems have often been used to record training data for robot imitation learning. Motion capture produces highly informative data sets describing the configurations of the human body during the recorded motion. Unfortunately, this accuracy comes at the price of high costs and reduced mobility. In recent years, however, the consumer market has introduced various alternatives, e.g. the Microsoft Kinect camera, which provide motion capture capabilities at significantly lower cost and without restrictions with respect to the required space. Yet both the accuracy and the number of degrees of freedom are limited in such low-cost systems. With the Visual Bootstrapping method, we aim to compensate for the inaccuracies of low-cost motion capture systems such as the Microsoft Kinect camera through post-processing and machine learning algorithms.

    The first steps of the Visual Bootstrapping method are the motion capture process, e.g. using the Kinect, and the mapping of the recorded human motion data to a virtual humanoid robot. It is generally known that human motion data cannot be transferred to a humanoid robot without further adaptation. This problem is due to the different skeleton structures, variations in limb sizes, joint angle limits etc., and is generally referred to as the correspondence problem.
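    One facet of the correspondence problem is that the robot's limbs are shorter than the human's, so Cartesian extremity positions must at least be rescaled before they can serve as targets. A minimal sketch of this idea (the function name and measurements are illustrative assumptions, not part of the released code):

    ```python
    # Illustrative sketch (not part of the released code): rescale a human
    # extremity position into the robot's workspace by the ratio of the
    # total limb lengths, measured relative to a shared limb root
    # (e.g. the shoulder for a hand target).

    def scale_extremity_target(human_pos, human_limb_len, robot_limb_len):
        """Scale a 3D extremity position (relative to the limb root) so
        that poses reachable by the human stay reachable for the robot."""
        ratio = robot_limb_len / human_limb_len
        return tuple(ratio * c for c in human_pos)

    # A human wrist 0.6 m from the shoulder (arm length 0.7 m),
    # mapped onto a robot arm of length 0.35 m:
    target = scale_extremity_target((0.3, 0.0, 0.52), 0.7, 0.35)
    ```

    Scaling positions rather than copying joint angles sidesteps the differing skeleton structures: only the end-effector goals are transferred, and the robot's own kinematics decide how to reach them.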


    Left: A motion is recorded with a markerless tracking system. Middle: The extremities (hands, feet, head) of the human actor are associated with markers in a skeleton representation. Right: These markers are used as targets for the robot's inverse kinematics solver, resulting in a mapped motion.

    We used a mapping strategy that aims to maintain a correspondence between the extremities (hands, feet, head) of the human actor and the robot. More concretely, the spatial positions of the human's extremities are mapped onto a scaled, approximately human-sized virtual robot. Through an inverse kinematics solver that is able to handle multiple kinematic chains, namely iTaSC (instantaneous task specification and control), the joint angle values of the virtual robot are calculated. The virtual robot's joint angles are restricted by the same constraints as the real robot's. The figure above gives an overview of this motion mapping process. Generally, however, the Visual Bootstrapping method does not assume or depend on a particular mapping strategy.
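    iTaSC itself runs inside Blender as a constraint-based solver, but the core idea — iteratively driving a kinematic chain toward a marker target while clamping each joint to the real robot's limits — can be sketched with a simple CCD (cyclic coordinate descent) solver for a planar two-link arm. This is a stand-in illustration under simplified assumptions, not the iTaSC algorithm:

    ```python
    import math

    def fk(angles, lengths):
        """Forward kinematics: positions of all joints of a planar chain."""
        x = y = theta = 0.0
        pts = [(x, y)]
        for a, l in zip(angles, lengths):
            theta += a
            x += l * math.cos(theta)
            y += l * math.sin(theta)
            pts.append((x, y))
        return pts

    def ccd_ik(target, angles, lengths, limits, iters=200):
        """CCD: sweep from the tip, rotating each joint so the end effector
        moves toward the target, clamped to per-joint limits (mirroring how
        the virtual robot is constrained to the real robot's joint limits)."""
        angles = list(angles)
        for _ in range(iters):
            for i in reversed(range(len(angles))):
                pts = fk(angles, lengths)
                end, joint = pts[-1], pts[i]
                a_end = math.atan2(end[1] - joint[1], end[0] - joint[0])
                a_tgt = math.atan2(target[1] - joint[1], target[0] - joint[0])
                # shortest-path angle difference, wrapped to [-pi, pi]
                delta = math.atan2(math.sin(a_tgt - a_end),
                                   math.cos(a_tgt - a_end))
                lo, hi = limits[i]
                angles[i] = min(hi, max(lo, angles[i] + delta))
        return angles

    # Two 0.1 m links (roughly Nao-arm scale, an assumed value);
    # the "elbow" is limited to [0, 2.2] rad.
    angles = ccd_ik((0.1, 0.1), [0.2, 0.2], [0.1, 0.1],
                    [(-2.1, 2.1), (0.0, 2.2)])
    ```

    The clamping step is the essential point: because every intermediate pose already respects the joint limits, the resulting trajectory never commands the robot into an infeasible configuration.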

    The related blend file, containing the inverse kinematics chain and the Nao model, can be obtained by executing the following command:

    hg clone https://bitbucket.org/JuvenileFlippancy/naoblender

    Please make sure to cite our related papers when using our code in your project!
