Sparse Latent Space Policy Search

Kevin presented our new paper “Sparse Latent Space Policy Search” at the AAAI-16 conference.

Supplementary Material

The members of the Interactive Robotics Lab at Arizona State University explore the intersection of robotics, artificial intelligence, and human-robot interaction. Our main research focus is the development of machine learning methods that allow humanoid robots to behave in an intelligent and autonomous manner. These robot learning techniques span a wide variety of approaches, including supervised learning, reinforcement learning, imitation learning, and interaction learning. They enable robots to gradually increase their repertoire of skills, e.g., grasping objects, manipulating them, or walking, without additional effort from a human programmer. We also develop methods that address the question of when and how to engage in collaboration with a human partner. Our methods have been applied to a large number of different robots in the US, Europe, and Japan and have proven particularly successful in enabling intelligent automotive manufacturing. The research group maintains strong collaborations with leading robotics and machine learning research groups across the globe.

News

Intention projection featured at the Institution of Mechanical Engineers


The work of our students Ramsundar and Yash is featured at the Institution of Mechanical Engineers. Our “[…] lab has created an augmented-reality approach, where the robot uses a projector to highlight objects it’s about to reach for, or to illuminate the route it’s going to take. ‘The environment becomes a canvas for the robot to communicate its intent to the human partner’ (Amor) […].” (Amit Katwala, imeche.org)

The full article can be found here

New article “ASU Robotics turns to nature for inspiration” about our C-TURTLE at 3TV / CBS 5



Video by azfamily.com (KPHO Broadcasting Corporation)

“ASU Robotics students have developed a robot that mimics a sea turtle as part of a research project looking at ways to integrate computer science, biology and engineering. The team wanted to come up with the best solution on how to travel over sand. The students settled on a sea turtle as a great option.” (azfamily.com)

Full article here

Two new papers on our robot turtle accepted


We have two new papers accepted to RSS and Living Machines 2017.

The submission to RSS, From the Lab to the Desert: Fast Prototyping and Learning of Robot Locomotion, introduces a new methodology that combines rapid prototyping with sample-efficient reinforcement learning to produce effective locomotion for a sea-turtle-inspired robotic platform in a desert environment.

The submission to Living Machines, Bio-inspired Robot Design Considering Load-bearing and Kinematic Ontogeny of Chelonioidea Sea Turtles, explores the effect of biologically inspired fins on the locomotion of our sea-turtle robot.

New Paper: Sample-Efficient Reinforcement Learning for Robot to Human Handover Tasks



We have a new paper accepted at the Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM). Trevor will present our new work on learning to hand over objects at the University of Michigan, Ann Arbor. Abstract:
While significant advancements have been made recently in the field of reinforcement learning, relatively little work has been devoted to reinforcement learning in a human context. Learning in the context of a human adds a variety of additional constraints that make the problem more difficult, including an increased importance of sample efficiency and the inherent unpredictability of the human counterpart. In this work, we used the Sparse Latent Space Policy Search algorithm and a linear-Gaussian trajectory approximator with the objective of learning optimized, understandable trajectories for object handovers between a robot and a human with very high sample efficiency.
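For readers curious how this kind of latent-space policy search looks in code, below is a minimal, illustrative Python sketch: a Gaussian policy over a low-dimensional latent space is projected to linear trajectory weights and improved with generic reward-weighted updates (in the spirit of PoWER/REPS-style episodic methods). The projection matrix, basis parameters, and reward function are hypothetical stand-ins, not the algorithm or reward from the paper.

```python
# Illustrative sketch only (not the paper's implementation): episodic
# policy search over a low-dimensional latent space, mapped to the
# weights of a linear trajectory model.
import numpy as np

rng = np.random.default_rng(0)
n_basis, latent_dim = 20, 3                  # basis functions, latent size
W = rng.normal(size=(n_basis, latent_dim))   # hypothetical latent projection
mu = np.zeros(latent_dim)                    # latent policy mean
sigma = np.ones(latent_dim)                  # latent policy std. dev.

def rollout(weights, T=50):
    """Linear trajectory model: weighted sum of Gaussian basis functions."""
    t = np.linspace(0, 1, T)
    centers = np.linspace(0, 1, n_basis)
    Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.01)
    return Phi @ weights

def handover_reward(traj):
    """Hypothetical reward: end near a target position with a smooth path."""
    return -abs(traj[-1] - 1.0) - 0.1 * np.sum(np.diff(traj) ** 2)

for _ in range(100):
    z = mu + sigma * rng.normal(size=(25, latent_dim))  # sample latent params
    R = np.array([handover_reward(rollout(W @ zi)) for zi in z])
    w = np.exp((R - R.max()) / (R.std() + 1e-8))        # reward weights
    w /= w.sum()
    mu = w @ z                                          # weighted mean update
    sigma = np.sqrt(w @ (z - mu) ** 2) + 1e-3           # weighted std update

print("final reward:", handover_reward(rollout(W @ mu)))
```

The actual algorithm differs in important ways, notably by exploiting sparsity in the latent space; the sketch above only conveys the general structure of episodic latent-space policy search.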

Graduation Spring 2017: Ramsundar Kalpagam Ganesan and Indranil Sur


Our graduate students Ramsundar Kalpagam Ganesan and Indranil Sur graduated with master’s degrees in spring 2017. Ramsundar worked on “Mediating Human-Robot Collaboration through Mixed Reality Cues”, utilizing virtual reality to ease complex Human-Robot Interaction. After graduation, he will start at Delphi Automotive, working on autonomous driving. Indranil worked on “Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models”. His work focused on teaching robots how to anticipate pain in order to create safer Human-Robot Interaction. He will soon start at SRI in Princeton. We wish them the best for their future careers.

A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations


We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot’s motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the capabilities for spatio-temporal adaptation of the demonstrated behavior to the current situation.

Preliminary Version (Full-text | ICRA 2017): PDF
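To make the idea of an interaction model concrete, here is a minimal sketch, not the system from the paper: the model is a joint Gaussian over basis-function weights of the human and robot trajectories, learned from paired demonstrations and conditioned at runtime on a partial observation of the human’s motion via standard Gaussian conditioning. The toy demonstration data and all names are hypothetical, and the temporal-adaptation component of the real system is omitted.

```python
# Minimal sketch of an interaction model: a joint Gaussian over human and
# robot trajectory weights, conditioned on a partial human observation.
import numpy as np

rng = np.random.default_rng(1)
n_basis, T = 10, 100
t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, n_basis)
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.02)  # (T, n_basis)

def fit_weights(traj):
    """Least-squares projection of a demonstrated trajectory onto the basis."""
    return np.linalg.lstsq(Phi, traj, rcond=None)[0]

# Training: stack the basis weights of each human demo with its paired
# robot response (toy sinusoidal demonstrations stand in for mocap data).
demos = []
for _ in range(30):
    human = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=T)
    robot = np.cos(2 * np.pi * t) + 0.05 * rng.normal(size=T)
    demos.append(np.concatenate([fit_weights(human), fit_weights(robot)]))
demos = np.array(demos)
mu = demos.mean(axis=0)
Sigma = np.cov(demos.T) + 1e-6 * np.eye(2 * n_basis)

# Runtime: observe the first few human samples, then condition the joint
# Gaussian to predict the robot's weights (standard Gaussian conditioning).
obs_T = 30
H = Phi[:obs_T]                                  # human weights -> samples
A = np.hstack([H, np.zeros((obs_T, n_basis))])   # observe only the human part
y = np.sin(2 * np.pi * t[:obs_T])                # partially observed motion
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + 1e-4 * np.eye(obs_T))
mu_post = mu + K @ (y - A @ mu)
robot_traj = Phi @ mu_post[n_basis:]             # adapted robot motion
```

Re-running the conditioning step as more of the human’s motion is observed is what allows the robot to adapt continuously during the interaction.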

IRL Student Receives NSF EAPSI Fellowship


Joe Campbell, a Ph.D. student in the Interactive Robotics Lab, received the prestigious NSF EAPSI Fellowship. The fellowship will enable Joe to travel to Japan, where he will spend 10 weeks at Osaka University working with our research collaborators from the Hosoda Lab (Prof. Ikemoto and Prof. Hosoda). He will develop new Interaction Primitive approaches for human-robot interaction.

Our SunDevil-RX robot learns to play Basketball


Our robot SunDevil-RX was featured in various news articles. We recently showed how machine learning can be used by an autonomous robot to learn how to play basketball: SunDevil-RX repeatedly shot a ball at the hoop and gradually refined its motor skills. You can see a video of the learning process in the articles.

ASU Now: https://asunow.asu.edu/20161103-discoveries-asu-robot-teaches-itself-how-shoot-hoops-matter-hours

Recode: http://www.recode.net/2016/11/4/13524676/machine-learning-ai-basketball-robot-arizona-state-asu

Research featured in New Scientist


The research collaboration between David Vogt and Heni Ben Amor on human-robot collaboration was featured in the latest print issue of New Scientist.

An excerpt from the article:

That could relieve some of the physical stress on workers, says team member Heni Ben Amor at Arizona State University in Tempe.

“Ideally, humans and robots together should be able to do something that, individually or separately, they wouldn’t have been able to do alone,” he says.

It appeared online under the title:

Robot learns to play with Lego by watching human teachers

NASA Space Grant


Congratulations to our junior member Ricky Johnson on receiving the NASA Space Grant! Ricky will be working on football-playing robots.

About the Lab

This is the website of the Interactive Robotics Laboratory (Ben Amor Lab) at Arizona State University. We focus on developing novel machine learning techniques that allow robots to physically interact with objects and humans in their environment.