Sparse Latent Space Policy Search

Kevin presented our new paper “Sparse Latent Space Policy Search” at the AAAI-16 conference.

Supplementary Material

The members of the Interactive Robotics Lab at Arizona State University explore the intersection of robotics, artificial intelligence, and human-robot interaction. Our main research focus is the development of machine learning methods that allow humanoid robots to behave in an intelligent and autonomous manner. These robot learning techniques span a wide variety of approaches, such as supervised learning, reinforcement learning, imitation learning, and interaction learning. They enable robots to gradually increase their repertoire of skills, e.g., grasping, manipulation, or walking, without additional effort from a human programmer. We also develop methods that address the question of when and how to engage in collaboration with a human partner. Our methods have been applied to a large number of different robots in the US, Europe, and Japan and have proven particularly successful in enabling intelligent automotive manufacturing. The research group maintains strong collaborations with leading robotics and machine learning research groups across the globe.

News

New Paper: Sample-Efficient Reinforcement Learning for Robot to Human Handover Tasks


We have a new paper accepted at the Multi-disciplinary Conference on Reinforcement Learning and Decision Making. Trevor will present our new work on learning to hand over objects at the University of Michigan, Ann Arbor. Abstract:
While significant advancements have been made recently in the field of reinforcement learning, relatively little work has been devoted to reinforcement learning in a human context. Learning in the context of a human adds a variety of additional constraints that make the problem more difficult, including an increased importance of sample efficiency and the inherent unpredictability of the human counterpart. In this work, we use the Sparse Latent Space Policy Search algorithm and a linear-Gaussian trajectory approximator with the objective of learning optimized, understandable trajectories for object handovers between a robot and a human with very high sample efficiency.
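For readers unfamiliar with the representation mentioned in the abstract, a linear-Gaussian trajectory model can be illustrated with a small sketch. This is our simplified reading, not the paper's implementation: a trajectory is a weighted sum of basis functions, the weights follow a Gaussian distribution, and sampling new weight vectors yields new candidate handover trajectories.

```python
import numpy as np

# Illustrative sketch of a linear-Gaussian trajectory model, based on our
# reading of the abstract (not the paper's implementation). A 1-D trajectory
# is tau(t) = phi(t) @ w with Gaussian weights w ~ N(mu, Sigma); sampling w
# produces smooth candidate trajectories, e.g., for a handover motion.

def rbf_features(t, n_basis=8):
    """Normalized radial basis functions over a phase variable t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    width = 1.0 / n_basis ** 2
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
Phi = rbf_features(t)                    # (100, 8) basis matrix
mu = np.linspace(0.0, 0.5, 8)            # mean weights: reach toward 0.5 m
Sigma = 0.01 * np.eye(8)                 # weight covariance (exploration noise)
w = rng.multivariate_normal(mu, Sigma)   # sample one weight vector
trajectory = Phi @ w                     # one sampled candidate trajectory
```

Because the model is linear in the weights and Gaussian over them, new trajectories can be sampled, evaluated, and updated from very few rollouts, which is what makes such representations attractive for sample-efficient policy search.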

 

Graduation Spring 2017: Ramsundar Kalpagam Ganesan and Indranil Sur

Our graduate students Ramsundar Kalpagam Ganesan and Indranil Sur graduated with master's degrees in spring 2017. Ramsundar worked on “Mediating Human-Robot Collaboration through Mixed Reality Cues”, utilizing virtual reality to ease complex human-robot interaction. After graduation, he will start at Delphi Automotive, working on autonomous driving. Indranil worked on “Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models”. His work focused on teaching robots to anticipate pain in order to create safer human-robot interaction. He will soon start at SRI in Princeton. We wish them the best for their future careers.

 

A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations


We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot’s motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the capabilities for spatio-temporal adaptation of the demonstrated behavior to the current situation.

Preliminary Version (Full-text | ICRA 2017): PDF
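The temporal-adaptation idea above can be sketched in a few lines. This is a hedged illustration of one possible mechanism, not the paper's actual system: align the observed human motion to the demonstrated human motion with dynamic time warping (DTW), then index the demonstrated robot motion through the same alignment.

```python
import numpy as np

# Hedged sketch (our illustration, not the paper's system): temporally adapt
# a demonstrated robot motion to a live human partner by aligning the
# observed human motion to the demonstrated human motion with DTW, then
# replaying the robot demonstration under the same alignment.

def dtw_align(a, b):
    """Return the DTW alignment path between two 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n, m), np.inf)
    D[0, 0] = abs(a[0] - b[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(
                D[i - 1, j] if i > 0 else np.inf,
                D[i, j - 1] if j > 0 else np.inf,
                D[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            D[i, j] = abs(a[i] - b[j]) + prev
    # backtrack from the end to recover the optimal alignment path
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while i > 0 or j > 0:
        opts = []
        if i > 0 and j > 0:
            opts.append((D[i - 1, j - 1], i - 1, j - 1))
        if i > 0:
            opts.append((D[i - 1, j], i - 1, j))
        if j > 0:
            opts.append((D[i, j - 1], i, j - 1))
        _, i, j = min(opts)
        path.append((i, j))
    return path[::-1]

t_demo = np.linspace(0.0, 1.0, 100)
human_demo = np.sin(2 * np.pi * t_demo)   # demonstrated human motion
robot_demo = np.cos(2 * np.pi * t_demo)   # demonstrated robot motion
t_obs = np.linspace(0.0, 1.0, 60)         # the human now moves faster
human_obs = np.sin(2 * np.pi * t_obs)

path = dtw_align(human_obs, human_demo)
# at each observed step, command the robot pose from the aligned demo index
robot_adapted = np.array([robot_demo[j] for _, j in path])
```

If the human speeds up or slows down, the alignment stretches or compresses the robot's replay accordingly, which conveys the flavor of temporal adaptation; the actual system additionally adapts the motion spatially.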

IRL Student Receives NSF EAPSI Fellowship


Joe Campbell, a Ph.D. student in the Interactive Robotics Lab, received the prestigious NSF EAPSI Fellowship. The fellowship will enable Joe to travel to Japan, where he will spend 10 weeks at Osaka University working with our research collaborators from the Hosoda Lab (Prof. Ikemoto and Prof. Hosoda). He will develop new Interaction Primitive approaches for human-robot interaction.

Our SunDevil-RX robot learns to play Basketball


Our robot SunDevil-RX was featured in various news articles. We recently showed how machine learning can be used by an autonomous robot to learn how to play basketball. SunDevil-RX repeatedly shot a ball at the hoop and learned to refine its motor skills. You can see a video of the learning process in the articles.

ASU Now: https://asunow.asu.edu/20161103-discoveries-asu-robot-teaches-itself-how-shoot-hoops-matter-hours

Recode: http://www.recode.net/2016/11/4/13524676/machine-learning-ai-basketball-robot-arizona-state-asu

Research featured in New Scientist


The research collaboration between David Vogt and Heni Ben Amor on human-robot collaboration was featured in the latest print issue of New Scientist.

An excerpt from the article:

That could relieve some of the physical stress on workers, says team member Heni Ben Amor at Arizona State University in Tempe.

“Ideally, humans and robots together should be able to do something that, individually or separately, they wouldn’t have been able to do alone,” he says.

It appeared online under the title:

Robot learns to play with Lego by watching human teachers

NASA Space Grant


Congratulations to our junior member Ricky Johnson on receiving the NASA Space Grant! Ricky will be working on football-playing robots.

GPUs for Deep Learning and Embedded Technologies Workshop @ ASU


We are organizing a joint workshop on “GPUs for Deep Learning and Embedded Technologies” with Nvidia! GPUs, deep learning, and embedded systems form a rapidly growing segment of artificial intelligence. They are increasingly used to deliver near-human-level accuracy for image classification, voice recognition, natural language processing, sentiment analysis, recommendation engines, and more. Application areas include facial recognition, scene detection, advanced medical and pharmaceutical research, and autonomous vehicles.

Our workshop will introduce students to deep learning on GPUs. More details here: http://interactive-robotics.engineering.asu.edu/nvidia-asu/

First release of the GrouPS MATLAB Code


We are releasing a MATLAB implementation of the GrouPS algorithm for sparse latent space policy search. GrouPS combines reinforcement learning with dimensionality reduction while also incorporating prior structural knowledge about the task. The algorithm exploits this combination in order to (1) perform efficient policy search, (2) infer the low-dimensional latent space of the task, and (3) leverage prior structural information. Prior knowledge about the locality of synergies can be included by specifying distinct groups of correlated sub-components. The provided code includes examples for performing policy search using the V-REP robotics simulator.
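To give a flavor of latent space policy search, here is a simplified, hypothetical sketch in Python. It is for illustration only: the released GrouPS code is in MATLAB, and this sketch omits the sparsity priors and group structure that are central to GrouPS. Policy parameters are generated from a low-dimensional latent vector through a learned linear mapping, and both the mapping and the latent distribution are updated with reward-weighted maximum-likelihood steps.

```python
import numpy as np

# Hypothetical sketch of policy search in a low-dimensional latent space
# (illustration only; not the released GrouPS code, which also models
# sparsity and groups of correlated sub-components). Policy parameters
# theta (dim D) are generated as theta = W @ z + noise from a latent
# vector z (dim d << D); the mapping W and the latent distribution are
# updated via reward-weighted maximum likelihood.

def latent_policy_search(reward_fn, D=10, d=2, n_iters=60, n_samples=40, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(D, d))    # latent-to-parameter mapping
    mu = np.zeros(d)                          # mean of the latent distribution
    sigma = 1.0                               # latent exploration std
    for _ in range(n_iters):
        Z = mu + sigma * rng.normal(size=(n_samples, d))          # latent samples
        Theta = Z @ W.T + 0.05 * rng.normal(size=(n_samples, D))  # noisy params
        R = np.array([reward_fn(th) for th in Theta])             # rollout returns
        w = np.exp((R - R.max()) / (R.std() + 1e-8))              # soft weights
        w /= w.sum()
        mu = w @ Z                                 # reward-weighted latent mean
        # reward-weighted least-squares refit of W (Theta ~ Z @ W.T)
        Zw = Z * w[:, None]
        W = (Theta.T @ Zw) @ np.linalg.pinv(Z.T @ Zw)
        sigma = max(0.05, 0.95 * sigma)            # anneal exploration
    return W @ mu                                  # final mean policy parameters

# toy task: match a fixed target parameter vector
target = np.linspace(-1.0, 1.0, 10)
theta = latent_policy_search(lambda th: -np.sum((th - target) ** 2))
```

Searching in the low-dimensional latent space rather than over all D parameters is what buys sample efficiency; GrouPS additionally infers the latent dimensionality and respects the specified groups of correlated sub-components.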

Poster Presentation at AAAI 2016

We will present our new paper “Sparse Latent Space Policy Search” in the “Reinforcement Learning I” session on Thursday afternoon, February 16, at the AAAI Conference on Artificial Intelligence in Phoenix. We will also present a poster in the evening session, right after the talks, from 6:30 PM to 8:30 PM. You can find the supplementary material here on our website:

http://interactive-robotics.engineering.asu.edu/project/sparse-latent-space-policy-search/

About the Lab

This is the website of the Interactive Robotics Laboratory (Ben Amor Lab) at Arizona State University. We focus on developing novel machine learning techniques that allow robots to physically interact with objects and humans in their environment.