Bayesian Interaction Primitives

This project introduces a fully Bayesian reformulation of Interaction Primitives for human-robot interaction and collaboration. A key insight is that a subset of human-robot interaction is conceptually related to simultaneous localization and mapping (SLAM) techniques. Leveraging this insight, we can significantly increase the accuracy of temporal estimation and inferred trajectories while simultaneously reducing the associated computational complexity. We show that this enables more complex human-robot interaction scenarios involving more degrees of freedom. This page gives a brief overview of the features and a link to download the code. For an in-depth discussion of BIP, please refer to the paper.

Temporal Robustness

A demonstration of the temporal robustness of BIP can be seen to the right. The green trajectory shows the ‘mean’ trajectory generated from the weights of the training demonstrations. A partially observed trajectory, shown in blue, is incrementally given to the trained BIP instance, from which a response trajectory, shown in red, is generated. To simulate a human-robot interaction scenario, only one degree of freedom (the Y coordinate) is given to the BIP instance during inference, while both the X and Y degrees of freedom are generated in response from the conditional distribution. In this particular instance, the partial trajectory is 20% slower than the training trajectories, which can be seen in the initial overlap between the observed trajectory and the generated response trajectory. However, the SLAM-style localization soon corrects for this, and the correct phase estimate is achieved by roughly 25% visibility. Note that the gap between the observed and generated trajectories at the tail end of the loop is due to an accumulation of errors in the phase velocity, which is no longer meaningfully updated once the observed Y degree of freedom becomes relatively static.
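To illustrate the idea behind this temporal correction, the following is a minimal, self-contained sketch (it does not use the intprim API; the basis functions, the 80%-speed observation, and the grid search are all illustrative assumptions). A partial observation running 20% slower than the demonstrations is aligned by finding the candidate phase velocity that best explains the observed samples under the learned mean trajectory:

```python
import numpy as np

# Gaussian radial basis functions over the phase interval [0, 1].
centers = np.linspace(0.0, 1.0, 10)

def basis(p):
    """Basis activations for phase values p; shape (len(p), num_basis)."""
    return np.exp(-(np.atleast_1d(p)[:, None] - centers) ** 2 / 0.02)

# "Learned" model: mean basis weights fit to a demonstration at nominal speed.
train_phase = np.linspace(0.0, 1.0, 100)
mu_w = np.linalg.lstsq(basis(train_phase),
                       np.sin(2 * np.pi * train_phase), rcond=None)[0]

# New interaction observed at 80% of nominal speed: after wall-clock time t
# of a 1-second nominal motion, the true phase is 0.8 * t.
t = np.linspace(0.0, 0.3, 30)          # first 30% of wall-clock time
obs = np.sin(2 * np.pi * 0.8 * t)      # what the slower partner produces

# Grid search over candidate phase velocities v; score each by the squared
# error between the observation and the mean trajectory evaluated at v * t.
velocities = np.linspace(0.5, 1.5, 101)
errors = [np.sum((basis(v * t) @ mu_w - obs) ** 2) for v in velocities]
v_hat = velocities[int(np.argmin(errors))]   # recovers roughly 0.8
```

The actual method estimates phase and phase velocity recursively inside the filter rather than by grid search, but the sketch shows why partial observations suffice to localize the interaction in time.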

Spatial Robustness

A demonstration of the spatial robustness of BIP can be seen to the right. The green trajectory shows the ‘mean’ trajectory generated from the weights of the training demonstrations. A partially observed trajectory, shown in blue, is gradually distorted by noise, yet the generated response trajectory, shown in red, shows no significant errors as a result, exhibiting spatial robustness.
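This robustness follows from treating observations as noisy during conditioning. The sketch below (again hypothetical names and numbers, not the intprim API) conditions a Gaussian prior over basis weights on a heavily distorted partial observation: with an observation-noise covariance R matched to the distortion, the Kalman-style update discounts the noise and the response stays near the demonstrated mean, whereas a near-zero R chases the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 8)

def basis(p):
    """Gaussian basis activations for phase values p."""
    return np.exp(-(np.atleast_1d(p)[:, None] - centers) ** 2 / 0.04)

phases = np.linspace(0.0, 1.0, 100)
Phi = basis(phases)

# Stand-in for a learned model: prior mean/covariance over basis weights.
mu_w = np.linalg.lstsq(Phi, np.sin(2 * np.pi * phases), rcond=None)[0]
Sigma_w = 0.1 * np.eye(8)

# Distorted partial observation: the true signal plus heavy noise.
obs_p = phases[:40]
noisy = np.sin(2 * np.pi * obs_p) + 0.5 * rng.standard_normal(40)
Phi_o = basis(obs_p)

def condition(obs, R):
    """Condition the weight prior on obs and regenerate the full trajectory."""
    K = Sigma_w @ Phi_o.T @ np.linalg.inv(Phi_o @ Sigma_w @ Phi_o.T + R)
    return Phi @ (mu_w + K @ (obs - Phi_o @ mu_w))

mean_traj = Phi @ mu_w
robust = condition(noisy, 0.25 * np.eye(40))   # R matched to the noise level
overfit = condition(noisy, 1e-6 * np.eye(40))  # R near zero: fits the noise

err_robust = np.abs(robust - mean_traj).max()
err_overfit = np.abs(overfit - mean_traj).max()
```

The same mechanism is what keeps the red response trajectory stable in the demo: the distorted blue observation is down-weighted in proportion to its assumed noise rather than tracked exactly.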

Code

You can download the code from the link below. The documentation can also be found in the related git repository.

Download: https://github.com/ir-lab/intprim

Authors: Joseph Campbell (jacampb1@asu.edu) and Simon Stepputtis (sstepput@asu.edu)

Cite: If you use this library, please use the following citation:
    @InProceedings{campbell17a,
        title = {Bayesian Interaction Primitives: A SLAM Approach to Human-Robot Interaction},
        author = {Joseph Campbell and Heni Ben Amor},
        booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
        pages = {379--387},
        year = {2017},
        editor = {Sergey Levine and Vincent Vanhoucke and Ken Goldberg},
        volume = {78},
        series = {Proceedings of Machine Learning Research},
        month = {13--15 Nov},
        publisher = {PMLR},
        pdf = {http://proceedings.mlr.press/v78/campbell17a/campbell17a.pdf},
        url = {http://proceedings.mlr.press/v78/campbell17a.html}
    }