The members of the Interactive Robotics Lab at Arizona State University explore the intersection of robotics, artificial intelligence, and human-robot interaction. Our main research focus is the development of machine learning methods that allow humanoid robots to behave in an intelligent and autonomous manner. These robot learning techniques span a wide variety of approaches, such as supervised learning, reinforcement learning, imitation learning, and interaction learning. They enable robots to gradually expand their repertoire of skills, e.g., grasping, manipulation, or walking, without additional effort from a human programmer. We also develop methods that address the question of when and how to engage in collaboration with a human partner. Our methods have been applied to a large number of different robots in the US, Europe, and Japan and have proven particularly successful in enabling intelligent automotive manufacturing. The research group maintains strong collaborations with leading robotics and machine learning research groups across the globe.
The submission to RSS, From the Lab to the Desert: Fast Prototyping and Learning of Robot Locomotion, introduces a new methodology that combines rapid prototyping with sample-efficient reinforcement learning to produce effective locomotion for a sea-turtle-inspired robotic platform in a desert environment.
The submission to Living Machines, Bio-inspired Robot Design Considering Load-bearing and Kinematic Ontogeny of Chelonioidea Sea Turtles, explores the effect of biologically-inspired fins on the locomotion of our sea-turtle robot.
We have a new paper accepted at the Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM). Trevor will present our new work on learning to hand over objects at the University of Michigan, Ann Arbor. Abstract:
While significant advancements have been made recently in the field of reinforcement learning, relatively little work has been devoted to reinforcement learning in a human context. Learning in the context of a human adds a variety of constraints that make the problem more difficult, including an increased importance of sample efficiency and the inherent unpredictability of the human counterpart. In this work, we use the Sparse Latent Space Policy Search algorithm and a linear-Gaussian trajectory approximator to learn optimized, understandable trajectories for object handovers between a robot and a human with very high sample efficiency.
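The abstract above rests on two ideas: a linear-Gaussian trajectory representation and sample-efficient policy search. As a rough illustration of that combination (not the authors' Sparse Latent Space Policy Search algorithm), the sketch below parameterizes a 1-D trajectory with radial-basis-function weights and refines them with a generic reward-weighted update; the task, basis functions, and reward are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D handover trajectory: y(t) = Phi(t) @ w, where Phi holds
# normalized radial basis functions over time and w are the weights that the
# policy search optimizes (Gaussian exploration noise is placed on w).
T, K = 50, 8
t = np.linspace(0.0, 1.0, T)
centers = np.linspace(0.0, 1.0, K)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)
Phi /= Phi.sum(axis=1, keepdims=True)

target = np.sin(np.pi * t)  # stand-in for a "good handover" trajectory

def reward(w):
    # Negative mean squared error to the target trajectory (higher is better).
    return -np.mean((Phi @ w - target) ** 2)

# Reward-weighted update: sample weight perturbations, weight each rollout by
# its exponentiated reward, and move the mean toward the good samples. Every
# rollout contributes to the update, which is what buys sample efficiency.
mu, sigma = np.zeros(K), 0.5
for _ in range(30):
    eps = rng.normal(0.0, sigma, size=(20, K))
    R = np.array([reward(mu + e) for e in eps])
    wts = np.exp((R - R.max()) / 0.1)
    mu = mu + (wts[:, None] * eps).sum(axis=0) / wts.sum()
    sigma *= 0.95  # anneal exploration as the trajectory improves
```

After a few dozen simulated rollouts the mean trajectory `Phi @ mu` closely tracks the target; in the actual handover setting the reward would come from the interaction with the human rather than a known target curve.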
Our graduate students Ramsundar Kalpagam Ganesan and Indranil Sur graduated with master's degrees in spring 2017. Ramsundar worked on “Mediating Human-Robot Collaboration through Mixed Reality Cues”, utilizing virtual reality to ease complex human-robot interaction. After graduation he will start at Delphi Automotive, working on autonomous driving. Indranil worked on “Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models”. His work focused on teaching robots to anticipate pain in order to create safer human-robot interaction. He will start soon at SRI in Princeton. We wish them the best for their future careers.
We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot’s motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the approach's capability for spatio-temporal adaptation of the demonstrated behavior to the current situation.
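The interaction model described above can be illustrated with a toy sketch. Interaction Primitive approaches model the demonstrated human and robot trajectories as a joint Gaussian over basis-function weights and adapt the robot by Gaussian conditioning on the observed human motion; the sketch below shows only that conditioning step on synthetic weight vectors. The dimensions, demonstration data, and coupling matrix are invented for illustration, and the temporal (phase) alignment is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the learned interaction model: each demonstration is a
# pair of basis-function weight vectors (human, robot). Training fits a
# joint Gaussian over the stacked weights; at runtime the robot's weights
# are obtained by conditioning that Gaussian on the observed human weights.
Dh, Dr, N = 5, 5, 40
A = rng.normal(size=(Dr, Dh))                     # invented human->robot coupling
wh = rng.normal(size=(N, Dh))                     # "human" demonstration weights
wr = wh @ A.T + 0.05 * rng.normal(size=(N, Dr))   # correlated "robot" weights

W = np.hstack([wh, wr])
mu = W.mean(axis=0)
Sigma = np.cov(W.T) + 1e-6 * np.eye(Dh + Dr)      # small ridge for stability

# Partition the joint Gaussian into human and robot blocks.
mu_h, mu_r = mu[:Dh], mu[Dh:]
S_hh = Sigma[:Dh, :Dh]
S_rh = Sigma[Dh:, :Dh]

def infer_robot(wh_obs):
    # Conditional mean E[w_r | w_h = wh_obs] of the joint Gaussian.
    return mu_r + S_rh @ np.linalg.solve(S_hh, wh_obs - mu_h)

# For a new human observation, the inferred robot weights follow the
# coupling that generated the demonstrations.
wh_new = rng.normal(size=Dh)
robot_weights = infer_robot(wh_new)
```

In the full system the same conditioning runs continuously as the human moves, which is what produces the spatial adaptation; the temporal adaptation comes from a separate phase-estimation step not shown here.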
Preliminary Version (Full-text | ICRA 2017): PDF
Joe Campbell, Ph.D. student in the Interactive Robotics Lab, received the prestigious NSF EAPSI Fellowship. The fellowship will enable Joe to travel to Japan, where he will spend 10 weeks at Osaka University working with our research collaborators Prof. Ikemoto and Prof. Hosoda of the Hosoda Lab. He will develop new Interaction Primitive approaches for human-robot interaction.
Our robot Sun Devil-RX was featured in various news articles. We have recently shown how machine learning can be used by an autonomous robot to learn how to play basketball. Sun Devil repeatedly shot a ball at the hoop and learned to refine its motor skills. You can see a video of the learning process in the articles.
The research collaboration between David Vogt and Heni Ben Amor on human-robot collaboration was featured in the latest print issue of New Scientist.
An excerpt from the article:
That could relieve some of the physical stress on workers, says team member Heni Ben Amor at Arizona State University in Tempe.
“Ideally, humans and robots together should be able to do something that, individually or separately, they wouldn’t have been able to do alone,” he says.
It appeared online under the title:
Congratulations to our junior member Ricky Johnson for receiving a NASA Space Grant! Ricky will be working on football-playing robots.
We are organizing a joint workshop on “GPUs for Deep Learning and Embedded Technologies” with Nvidia! GPUs, deep learning, and embedded systems form a rapidly growing segment of artificial intelligence. They are increasingly used to deliver near-human accuracy in image classification, voice recognition, natural language processing, sentiment analysis, recommendation engines, and more. Application areas include facial recognition, scene detection, advanced medical and pharmaceutical research, and autonomous, self-driving vehicles.
Our workshop will introduce students to deep learning on GPUs; more details here: http://interactive-robotics.engineering.asu.edu/nvidia-asu/
We are releasing a MATLAB implementation of the GrouPS algorithm for sparse latent space policy search. GrouPS combines reinforcement learning with dimensionality reduction and exploits this combination to (1) perform efficient policy search, (2) infer the low-dimensional latent space of the task, and (3) incorporate prior structural information. Prior knowledge about the locality of synergies can be included by specifying distinct groups of correlated sub-components. The provided code includes examples of performing policy search with the V-REP robotics simulator.
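As a rough sketch of the idea behind the release (not the released MATLAB code or the actual GrouPS inference), the snippet below explores a high-dimensional parameter vector through a low-dimensional latent variable whose projection matrix carries a hand-specified group-sparsity pattern, then improves the latent mean with a generic reward-weighted step. The dimensions, groups, projection, and reward are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented setup: a 12-D joint-parameter vector theta is generated from a
# 2-D latent variable z through a projection W whose sparsity pattern
# encodes prior group structure -- parameters 0-5 (one "group" of
# correlated sub-components) load only on z[0], parameters 6-11 on z[1].
D, d = 12, 2
mask = np.zeros((D, d))
mask[:6, 0] = 1.0
mask[6:, 1] = 1.0
W = mask * rng.normal(size=(D, d))

z_true = np.array([1.5, -0.8])
target = W @ z_true          # stand-in optimum, reachable within the latent space

def reward(theta):
    # Negative squared distance to the target parameters (higher is better).
    return -np.sum((theta - target) ** 2)

# Reward-weighted search in the latent space: exploration noise is injected
# only through z, so the search is 2-D rather than 12-D.
mu_z, sigma = np.zeros(d), 0.5
for _ in range(60):
    Z = mu_z + rng.normal(0.0, sigma, size=(30, d))
    R = np.array([reward(W @ z) for z in Z])
    wts = np.exp(R - R.max())
    mu_z = (wts[:, None] * Z).sum(axis=0) / wts.sum()
    sigma *= 0.95  # anneal exploration
```

The group mask is where prior structural knowledge enters: each latent dimension only drives one block of correlated sub-components. In GrouPS itself both the latent space and the projection are inferred from the rollouts rather than fixed in advance, as in this sketch.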