2022
“Origami-Inspired Wearable Robot for Trunk Support.” Dongting Li, Emiliano Quinones Yumbla, Alyssa Olivas, Thomas Sugar, Heni Ben Amor, Hyunglae Lee, Wenlong Zhang, Daniel M. Aukes. IEEE/ASME Transactions on Mechatronics, 2022. Paper
“A System for Imitation Learning of Contact-Rich Bimanual Manipulation Policies.” Simon Stepputtis, Maryam Bandari, Stefan Schaal, Heni Ben Amor. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022. Paper
“Learning Ergonomic Control in Human–Robot Symbiotic Walking.” Geoffrey Clark, Heni Ben Amor. IEEE Transactions on Robotics, 2022. Paper
“Introduction to the Special Issue on Test Methods for Human-Robot Teaming Performance Evaluations.” Jeremy A. Marvel, Shelly Bagchi, Megan Zimmerman, Murat Aksu, Brian Antonishek, Yue Wang, Ross Mead, Terry Fong, Heni Ben Amor. ACM Transactions on Human-Robot Interaction (THRI), 2022. Paper
2021
“Local Repair of Neural Networks Using Optimization.” Keyvan Majd, Siyu Zhou, Heni Ben Amor, Georgios Fainekos, Sriram Sankaranarayanan. Paper
2020
Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, Heni Ben Amor
NeurIPS 2020
Paper | Code
View Abstract
Imitation learning is a popular approach for teaching motor skills to robots. However, most approaches focus on extracting policy parameters from execution traces alone (i.e., motion trajectories and perceptual data). No adequate communication channel exists between the human expert and the robot to describe critical aspects of the task, such as the properties of the target object or the intended shape of the motion. Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent (e.g., “go to the large green bowl”). The training process then interrelates these two modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at runtime on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity. We demonstrate in a set of simulation experiments how our approach can learn language-conditioned manipulation policies for a seven-degree-of-freedom robot arm and compare the results to a variety of alternative methods.
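To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a language-conditioned visuomotor policy: a sentence encoder and an image encoder produce a joint task embedding that conditions the motor network. All layer sizes, the vocabulary handling, and the 7-DoF output head are illustrative assumptions.

```python
# Minimal sketch of a language-conditioned visuomotor policy (illustrative only;
# layer sizes and modules are assumptions, not the architecture from the paper).
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, img_channels=3, dof=7):
        super().__init__()
        # Natural-language instruction -> fixed-size task embedding.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lang_rnn = nn.GRU(embed_dim, 128, batch_first=True)
        # RGB observation -> perceptual features.
        self.vision = nn.Sequential(
            nn.Conv2d(img_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
        )
        # The fused task embedding conditions the motor network, which maps the
        # current joint state to the next joint command.
        self.motor = nn.Sequential(
            nn.Linear(128 + 128 + dof, 256), nn.ReLU(),
            nn.Linear(256, dof),
        )

    def forward(self, tokens, image, joint_state):
        _, h = self.lang_rnn(self.word_embed(tokens))      # (1, B, 128)
        lang_feat = h.squeeze(0)
        img_feat = self.vision(image)
        return self.motor(torch.cat([lang_feat, img_feat, joint_state], dim=-1))

# Example: one tokenized command, a 64x64 RGB frame, and a 7-DoF joint state.
policy = LanguageConditionedPolicy()
action = policy(torch.randint(0, 1000, (1, 6)),
                torch.rand(1, 3, 64, 64),
                torch.rand(1, 7))
print(action.shape)  # torch.Size([1, 7])
```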
Predictive Modeling of Periodic Behavior for Human-Robot Symbiotic Walking
Geoffrey Clark, Joseph Campbell, Seyed Mostafa Rezayat Sorkhabadi, Wenlong Zhang, Heni Ben Amor
ICRA 2020
Paper
View Abstract
In this paper, we propose Periodic Interaction Primitives – a probabilistic framework that can be used to learn compact models of periodic behavior. Our approach extends existing formulations of Interaction Primitives to periodic movement regimes, i.e., walking. We show that this model is particularly well-suited for learning data-driven, customized models of human walking, which can then be used for generating predictions over future states or for inferring latent, biomechanical variables. We also demonstrate how the same framework can be used to learn controllers for a robotic prosthesis using an imitation learning approach. Results in experiments with human participants indicate that Periodic Interaction Primitives efficiently generate predictions and ankle angle control signals for a robotic prosthetic ankle, with a mean absolute error of 2.21 degrees at 0.0008 s per inference. Performance degrades gracefully in the presence of noise or sensor dropouts. Compared to alternatives, this algorithm runs 20 times faster and performs 4.5 times more accurately on test subjects.
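The core representational idea can be sketched as follows (a simplification, not the paper's method): a gait cycle is encoded by weights over periodic basis functions, and those weights form a compact, phase-indexed model from which future states are predicted. The full probabilistic machinery (joint weight distributions, phase estimation, conditioning) is omitted; the data and harmonic count are placeholders.

```python
# Minimal sketch of a periodic basis-function representation of gait data.
import numpy as np

def fourier_basis(phase, n_harmonics=4):
    """Periodic basis features for a gait phase in [0, 1)."""
    cols = [np.ones_like(phase)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * phase))
        cols.append(np.cos(2 * np.pi * k * phase))
    return np.stack(cols, axis=-1)

# Synthetic "ankle angle" demonstration over two gait cycles.
t = np.linspace(0.0, 2.0, 400)
phase = t % 1.0
ankle = 10.0 * np.sin(2 * np.pi * phase) + 3.0 * np.cos(4 * np.pi * phase)
ankle += 0.5 * np.random.randn(t.size)                  # sensor noise

# Fit basis weights by least squares; the weight vector is the compact,
# phase-indexed model of one stride.
Phi = fourier_basis(phase)
w, *_ = np.linalg.lstsq(Phi, ankle, rcond=None)

# Predict the remainder of the current cycle from the learned weights.
future_phase = np.linspace(0.6, 1.0, 50)
prediction = fourier_basis(future_phase) @ w
print(prediction[:5])
```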
2019
Data-efficient Co-Adaptation of Morphology and Behaviour with Deep Reinforcement Learning
Kevin Sebastian Luck, Heni Ben Amor, Roberto Calandra
Conference on Robot Learning 2019
Paper
View Abstract
Humans and animals are capable of quickly learning new behaviours to solve new tasks. Yet, we often forget that they also rely on a highly specialized morphology that co-adapted with motor control throughout thousands of years. Although compelling, the idea of co-adapting morphology and behaviours in robots is often infeasible because of the long manufacturing times and the need to re-design an appropriate controller for each morphology. In this paper, we propose a novel approach to automatically and efficiently co-adapt a robot morphology and its controller. Our approach is based on recent advances in deep reinforcement learning, specifically the Soft Actor-Critic algorithm. Key to our approach is the possibility of leveraging previously tested morphologies and behaviors to estimate the performance of new candidate morphologies. As such, we can make full use of the information available for making more informed decisions, with the ultimate goal of achieving a more data-efficient co-adaptation (i.e., reducing the number of morphologies and behaviors tested). Simulated experiments show that our approach requires drastically fewer design prototypes to find good morphology-behaviour combinations, making this method particularly suitable for future co-adaptation of robot designs in the real world.
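The selection loop can be sketched roughly as below (an illustration only: a Gaussian-process regressor stands in for the paper's critic-based performance estimate, and the morphology parameters and evaluate() routine are placeholders). The point is that most candidate designs are ranked by a surrogate fitted to previously tested designs rather than being manufactured and trained.

```python
# Illustrative sketch of surrogate-based morphology selection.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def evaluate(morphology):
    """Placeholder for training a controller on this design and returning its return."""
    return -np.sum((morphology - 0.3) ** 2) + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
tested, returns = [], []

for generation in range(10):
    candidates = rng.uniform(0.0, 1.0, size=(64, 3))    # candidate design parameters
    if tested:
        # Estimate each candidate's performance from previously tested designs
        # instead of evaluating every one of them.
        gp = GaussianProcessRegressor().fit(np.array(tested), np.array(returns))
        best = candidates[np.argmax(gp.predict(candidates))]
    else:
        best = candidates[0]
    tested.append(best)
    returns.append(evaluate(best))

print("best design:", tested[int(np.argmax(returns))])
```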
Toward Generalized Change Detection on Planetary Surfaces With Convolutional Autoencoders and Transfer Learning
Hannah Rae Kerner, Kiri L. Wagstaff, Brian D. Bue, Patrick C. Gray, James F. Bell III, and Heni Ben Amor
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Paper
View Abstract
Ongoing planetary exploration missions are returning large volumes of image data. Identifying surface changes in these images, e.g., new impact craters, is critical for investigating many scientific hypotheses. Traditional approaches to change detection rely on image differencing and manual feature engineering. These methods can be sensitive to irrelevant variations in illumination or image quality and typically require before and after images to be coregistered, which itself is a major challenge. Additionally, most prior change detection studies have been limited to remote sensing images of Earth. We propose a new deep learning approach for binary patch-level change detection involving transfer learning and nonlinear dimensionality reduction using convolutional autoencoders. Our experiments on diverse remote sensing datasets of Mars, the Moon, and Earth show that our methods can detect meaningful changes with high accuracy using a relatively small training dataset despite significant differences in illumination, image quality, imaging sensors, coregistration, and surface properties. We show that the latent representations learned by a convolutional autoencoder yield the most general representations for detecting change across surface feature types, scales, sensors, and planetary bodies.
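As a rough illustration of the latent-representation idea (a sketch under assumed patch size, architecture, and threshold, not the paper's configuration), a small convolutional autoencoder encodes co-located before/after patches, and change is flagged when their latent codes differ strongly:

```python
# Minimal sketch of patch-level change detection with a convolutional autoencoder.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def change_score(model, before, after):
    """Distance between latent codes of co-located before/after patches."""
    with torch.no_grad():
        _, z_before = model(before)
        _, z_after = model(after)
    return torch.norm(z_before - z_after).item()

model = PatchAutoencoder()
# In practice the autoencoder would first be trained to reconstruct "before"
# patches (e.g. with an MSE loss); only the scoring step is shown here.
before = torch.rand(1, 1, 64, 64)
after = before.clone()
after[..., 20:30, 20:30] = 1.0          # simulate a new surface feature
print("change score:", change_score(model, before, after))
```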
Novelty Detection for Multispectral Images with Application to Planetary Exploration
Hannah R Kerner, Danika F Wellington, Kiri L Wagstaff, James F Bell, Chiman Kwan, Heni Ben Amor
AAAI Conference on Innovative Applications of Artificial Intelligence
Paper
View Abstract
In this work, we present a system based on convolutional autoencoders for detecting novel features in multispectral images. We introduce SAMMIE: Selections based on Autoencoder Modeling of Multispectral Image Expectations. Previous work using autoencoders employed the scalar reconstruction error to classify new images as novel or typical. We show that a spatial-spectral error map can enable both accurate classification of novelty in multispectral images as well as human-comprehensible explanations of the detection. We apply our methodology to the detection of novel geologic features in multispectral images of the Martian surface collected by the Mastcam imaging system on the Mars Science Laboratory Curiosity rover.
Clone Swarms: Learning to Predict and Control Multi-Robot Systems by Imitation
Siyu Zhou, Mariano J. Phielipp, Jorge A. Sefair, Sara I. Walker, Heni Ben Amor
Paper
View Abstract
In this paper, we propose SwarmNet — a neural network architecture that can learn to predict and imitate the behavior of an observed swarm of agents in a centralized manner. Tested on artificially generated swarm motion data, the network achieves high levels of prediction accuracy and imitation authenticity. We compare our model to previous approaches for modelling interaction systems and show how modifying components of other models gradually approaches the performance of ours. Finally, we also discuss an extension of SwarmNet that can deal with nondeterministic, noisy, and uncertain environments, as often found in robotics applications.
Improved Exploration through Latent Trajectory Optimization in Deep Deterministic Policy Gradient
Kevin Sebastian Luck, Mel Vecerik, Simon Stepputtis, Heni Ben Amor and Jonathan Scholz
IROS 2019
Paper
View Abstract
Model-free reinforcement learning algorithms such as Deep Deterministic Policy Gradient (DDPG) often require additional exploration strategies, especially if the actor is of deterministic nature. This work evaluates the use of model-based trajectory optimization methods used for exploration in Deep Deterministic Policy Gradient when trained on a latent image embedding. In addition, an extension of DDPG is derived using a value function as critic, making use of a learned deep dynamics model to compute the policy gradient. This approach leads to a symbiotic relationship between the deep reinforcement learning algorithm and the latent trajectory optimizer. The trajectory optimizer benefits from the critic learned by the RL algorithm and the latter from the enhanced exploration generated by the planner. The developed methods are evaluated on two continuous control tasks, one in simulation and one in the real world. In particular, a Baxter robot is trained to perform an insertion task, while only receiving sparse rewards and images as observations from the environment.
Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration
Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
NeurIPS 2019 Workshop on Robot Learning: Control and Interaction in the Real World
Paper
View Abstract
In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn is used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability.
Multimodal Dataset of Human-Robot Hugging Interaction
Kunal Bagewadi, Joseph Campbell, Heni Ben Amor
Artificial Intelligence for Human-Robot Interaction (AI-HRI) 2019, AAAI Fall Symposium Series
Paper
View Abstract
A hug is a tight embrace and an expression of warmth, sympathy and camaraderie. Despite the fact that a hug often takes only a few seconds, it is filled with details and nuances and is a highly complex process of coordination between two agents. For human-robot collaborative tasks, it is necessary for humans to develop trust and see the robot as a partner to perform a given task together. Datasets representing agent-agent interaction are scarce and, if available, of limited quality. To study the underlying phenomena and variations in a hug between a person and a robot, we deployed a Baxter humanoid robot and wearable sensors on participants to record 353 episodes of hugging activity. 33 people were given minimal instructions to hug the humanoid robot so that the interaction would be as natural as possible. In this paper, we present our methodology and an analysis of the collected dataset. The dataset is intended for machine learning methods that allow the humanoid robot to anticipate and react to the movements of a person approaching for a hug. In this regard, we show the significance of the dataset by highlighting several of its features.
Learning Interactive Behaviors for Musculoskeletal Robots Using Bayesian Interaction Primitives
Joseph Campbell, Arne Hitzmann, Simon Stepputtis, Shuhei Ikemoto, Koh Hosoda, Heni Ben Amor
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019
Paper
View Abstract
Musculoskeletal robots that are based on pneumatic actuation have a variety of properties, such as compliance and back-drivability, that render them particularly appealing for human-robot collaboration. However, programming interactive and responsive behaviors for such systems is extremely challenging due to the nonlinearity and uncertainty inherent to their control. In this paper, we propose an approach for learning Bayesian Interaction Primitives for musculoskeletal robots given a limited set of example demonstrations. We show that this approach is capable of real-time state estimation and response generation for interaction with a robot for which no analytical model exists. Human-robot interaction experiments on a 'handshake' task show that the approach generalizes to new positions, interaction partners, and movement velocities.
Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks
Joseph Campbell, Simon Stepputtis, Heni Ben Amor
Robotics: Science and Systems (RSS) 2019
Paper
View Abstract
Human-robot interaction benefits greatly from multimodal sensor inputs as they enable increased robustness and generalization accuracy. Despite this observation, few HRI methods are capable of efficiently performing inference for multimodal systems. In this work, we introduce a reformulation of Interaction Primitives which allows for learning from demonstration of interaction tasks, while also gracefully handling nonlinearities inherent to multimodal inference in such scenarios. We also empirically show that our method results in more accurate, more robust, and faster inference than standard Interaction Primitives and other common methods in challenging HRI scenarios.
2018
Context-dependent image quality assessment of JPEG compressed Mars Science Laboratory Mastcam images using convolutional neural networks
Hannah R. Kerner, James F. Bell III, Heni Ben Amor
Computers & Geosciences 2018 (Volume 118)
Paper
View Abstract
The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images that are often JPEG compressed before being downlinked to Earth. Depending on the context of the observation, this compression can result in image artifacts that might introduce problems in the scientific interpretation of the data and might require the image to be retransmitted losslessly. We propose to streamline the tedious process of manually analyzing images using context-dependent image quality assessment, a process wherein the context and intent behind the image observation determine the acceptable image quality threshold. We propose a neural network solution for estimating the probability that a Mastcam user would find the quality of a compressed image acceptable for science analysis. We also propose an automatic labeling method that avoids the need for domain experts to label thousands of training examples. We performed multiple experiments to evaluate the ability of our model to assess context-dependent image quality, the efficiency a user might gain when incorporating our model, and the uncertainty of the model given different types of input images. We compare our approach to the state of the art in no-reference image quality assessment. Our model correlates well with the perceptions of scientists assessing context-dependent image quality and could result in significant time savings when included in the current Mastcam image review process.
One-shot learning of human–robot handovers with triadic interaction meshes
David Vogt, Simon Stepputtis, Bernhard Jung, Heni Ben Amor
Autonomous Robots (2018)
Paper
View Abstract
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.
Mediating Human-Robot Collaboration through Mixed Reality Cues
Ramsundar Kalpagam Ganesan, Yash K. Rathore, Heather M. Ross, Heni Ben Amor
IEEE Robotics and Automation Magazine
Paper
View Abstract
In this paper, we present a communication paradigm using a context-aware mixed reality approach for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on the tracked objects and the workspace. Simultaneous tracking and projection onto objects enable the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can also inform and warn humans about the intentions of the robot and the safety of the workspace. We hypothesized that using this system for executing a human-robot collaborative task will improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, we conducted an experiment involving human subjects and compared the performance (both objective and subjective) of the presented system with conventional forms of communication, namely printed and mobile display instructions. We found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
Deep Predictive Models for Collision Risk Assessment in Autonomous Driving
Mark Strickland, Georgios Fainekos, Heni Ben Amor
International Conference on Robotics and Automation (ICRA) 2018
Paper
View Abstract
In this paper, we investigate a predictive approach for collision risk assessment in autonomous and assisted driving. A deep predictive model is trained to anticipate imminent accidents from traditional video streams. In particular, the model learns to identify cues in RGB images that are predictive of hazardous upcoming situations. In contrast to previous work, our approach incorporates (a) temporal information during decision making, (b) multi-modal information about the environment, as well as the proprioceptive state and steering actions of the controlled vehicle, and (c) information about the uncertainty inherent to the task. To this end, we discuss Deep Predictive Models and present an implementation using a Bayesian Convolutional LSTM. Experiments in a simple simulation environment show that the approach can learn to predict impending accidents with reasonable accuracy, especially when multiple cameras are used as input sources.
Extrinsic Dexterity through Active Slip Control using Deep Predictive Models
Simon Stepputtis, Yezhou Yang and Heni Ben Amor
International Conference on Robotics and Automation (ICRA) 2018
Paper
View Abstract
We present a machine learning methodology for actively controlling slip, in order to increase robot dexterity. Leveraging recent insights in deep learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object. The obtained information is then used to precisely manipulate objects within a robot end-effector using external perturbations imposed by gravity or acceleration. We show in a set of experiments that this approach can be used to increase a robot’s repertoire of motor skills.
2017
Bayesian Interaction Primitives: A SLAM Approach to Human-Robot Interaction
Joseph Campbell and Heni Ben Amor
Conference on Robot Learning (CORL) 2017
Paper | Library
View Abstract
This paper introduces a fully Bayesian reformulation of Interaction Primitives for human-robot interaction and collaboration. A key insight is that a subset of human-robot interaction is conceptually related to simultaneous localization and mapping techniques. Leveraging this insight we can significantly increase the accuracy of temporal estimation and inferred trajectories while simultaneously reducing the associated computational complexity. We show that this enables more complex human-robot interaction scenarios involving more degrees of freedom.
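The SLAM analogy can be illustrated with the following simplified filter (an assumption-laden sketch, not the paper's formulation): an extended Kalman filter jointly tracks the interaction phase ("localization") and the basis weights of all trajectories ("map"), updating both from partial observations of the human. Dimensions, noise levels, and the omitted phase Jacobian are all simplifications.

```python
# Illustrative EKF sketch of the SLAM analogy in Bayesian Interaction Primitives.
import numpy as np

def gaussian_basis(phase, n_basis=10, width=0.02):
    centers = np.linspace(0, 1, n_basis)
    b = np.exp(-(phase - centers) ** 2 / (2 * width))
    return b / b.sum()

n_basis, n_dims = 10, 2                      # 1 observed human DoF + 1 robot DoF
state = np.zeros(1 + n_basis * n_dims)       # [phase, w_human, w_robot]
state[1:] = 0.1 * np.random.randn(n_basis * n_dims)   # prior mean (from demos)
P = np.eye(state.size)                       # prior covariance (from demos)
Q = 1e-4 * np.eye(state.size)                # process noise
R = np.array([[1e-2]])                       # observation noise (human DoF only)

def step(state, P, y, dt=0.01):
    # Prediction: phase advances at nominal speed, weights stay constant.
    state = state.copy()
    state[0] += dt
    P = P + Q
    # Observation model: only the human dimension is measured.
    # (Simplified: the Jacobian w.r.t. the phase is omitted.)
    b = gaussian_basis(state[0], n_basis)
    H = np.zeros((1, state.size))
    H[0, 1:1 + n_basis] = b
    y_hat = H @ state
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    state = state + (K @ (np.atleast_1d(y) - y_hat)).ravel()
    P = (np.eye(state.size) - K @ H) @ P
    return state, P

for t in range(50):                           # stream of human observations
    state, P = step(state, P, np.sin(0.05 * t))
# The updated robot weights generate the robot's response trajectory.
print(state[1 + n_basis:])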
Extracting Bimanual Synergies with Reinforcement Learning
Kevin Sebastian Luck and Heni Ben Amor
International Conference on Intelligent Robots and Systems (IROS) 2017
Paper
View Abstract
Motor synergies are an important concept in human motor control. Through the co-activation of multiple muscles, complex motion involving many degrees-of-freedom can be generated. However, leveraging this concept in robotics typically entails using human data that may be incompatible with the kinematics of the robot. In this paper, our goal is to enable a robot to identify synergies for low-dimensional control using trial-and-error only. We discuss how synergies can be learned through latent space policy search and introduce an extension of the algorithm for the re-use of previously learned synergies for exploration. The application of the algorithm on a bimanual manipulation task for the Baxter robot shows that performance can be increased by reusing learned synergies intra-task when learning to lift objects. However, the reuse of synergies between two tasks with different objects did not lead to a significant improvement.
Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models
Indranil Sur and Heni Ben Amor
International Conference on Intelligent Robots and Systems (IROS) 2017
Paper
View Abstract
To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this paper, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling.
From the Lab to the Desert: Fast Prototyping and Learning of Robot Locomotion
Kevin Sebastian Luck, Joseph Campbell, Andrew Jansen, Daniel M. Aukes and Heni Ben Amor
Robotics: Science and Systems 2017
Preliminary Version | Paper
View Abstract
In this paper, we discuss a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawler robot with variable, interchangeable fins. Learning is performed with different bio-inspired and original fin designs in both an indoor, artificial environment, as well as a natural environment in the Arizona desert. The findings of this study show that static policies developed in the laboratory do not translate to effective locomotion strategies in natural environments. In contrast to that, sample-efficient reinforcement learning can help to rapidly accommodate changes in the environment or the robot.
Bio-inspired Robot Design Considering Load-bearing and Kinematic Ontogeny of Chelonioidea Sea Turtles
Andrew Jansen, Kevin Sebastian Luck, Joseph Campbell, Heni Ben Amor and Daniel M. Aukes
Living Machines 2017
Preliminary Version | Paper
View Abstract
This work explores the physical implications of variation in fin shape and orientation that correspond to ontogenetic changes observed in sea turtles. Through the development of a bio-inspired robotic platform – CTurtle – we show that 1) these ontogenetic changes apparently occupy stable extrema for either load-bearing or high-velocity movement, and 2) mimicry of these variations in a robotic system confers greater load-bearing capacity and energy efficiency, at the expense of velocity (or vice-versa). A possible means of adapting to load conditions is also proposed. We endeavor to provide these results as part of a theoretical framework integrating biological inquiry and inspiration within an iterative design cycle based on laminate robotics.
A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations
David Vogt, Simon Stepputtis, Steve Grehl, Bernhard Jung, Heni Ben Amor
International Conference on Robotics and Automation 2017
Preliminary Version | Paper
View Abstract
We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot’s motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the capabilities for spatio-temporal adaption of the demonstrated behavior to the current situation.
2016
Traffic Light Status Detection Using Movement Patterns of Vehicles
Campbell, J.; Ben Amor, H.; Ang, M.; Fainekos, G.
International Conference on Intelligent Transportation Systems 2016
Paper
View Abstract
Vision-based methods for detecting the status of traffic lights used in autonomous vehicles may be unreliable due to occluded views, poor lighting conditions, or a dependence on unavailable high-precision meta-data, which is troublesome in such a safety-critical application. This paper proposes a complementary detection approach based on an entirely new source of information: the movement patterns of other nearby vehicles. This approach is robust to traditional sources of error, and may serve as a viable supplemental detection method. Several different classification models are presented for inferring traffic light status based on these patterns. Their performance is evaluated over real and simulated data sets, resulting in up to 97% accuracy in each set.
Projecting Robot Intentions into Human Environments
Andersen, R.; Madsen, O.; Moeslund, B; Ben Amor, H.
International Symposium on Robot and Human Interactive Communication 2016
Paper
View Abstract
Trained human co-workers can often easily predict each other’s intentions based on prior experience. When collaborating with a robot coworker, however, intentions are hard or impossible to infer. This difficulty of mental introspection makes human-robot collaboration challenging and can lead to dangerous misunderstandings. In this paper, we present a novel, object-aware projection technique that allows robots to visualize task information and intentions on physical objects in the environment. The approach uses modern object tracking methods in order to display information at specific spatial locations, taking into account the pose and shape of surrounding objects. As a result, a human co-worker can be informed in a timely manner about the safety of the workspace, the site of the next robot manipulation task, and the next subtasks to perform. A preliminary usability study compares the approach to collaboration approaches based on monitors and printed text. The study indicates that, on average, user effectiveness and satisfaction are higher with the projection-based approach.
Estimating Perturbations from Experience using Neural Networks and Information Transfer
Berger, E.; Vogt, D.; Grehl, S.; Jung, B.; Ben Amor, H.
International Conference on Robotics and Automation 2016
Paper
View Abstract
In order to ensure safe operation, robots must be able to reliably detect behavior perturbations that result from unexpected physical interactions with their environment and human co-workers. While some robots provide firmware force sensors that generate rough force estimates, more accurate force measurements are usually achieved with dedicated force-torque sensors. However, such sensors are often heavy, expensive and require an additional power supply. In the case of lightweight manipulators, the already limited payload capabilities may be reduced in a significant way. This paper presents an experience-based approach for accurately estimating external forces being applied to a robot without the need for a force-torque sensor. Using Information Transfer, a subset of sensors relevant to the executed behavior are identified from a larger set of internal sensors. Models mapping robot sensor data to force-torque measurements are learned using a neural network. These models can be used to predict the magnitude and direction of perturbations from affordable, proprioceptive sensors only. Experiments with a UR5 robot show that our method yields force estimates with accuracy comparable to a dedicated force-torque sensor. Moreover, our method yields a substantial improvement in accuracy over force-torque values provided by the robot firmware.
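A rough sketch of the two-stage idea follows (illustrative only: mutual information is used here as a simplified stand-in for the paper's Information Transfer measure, and the data, channel count, and network size are assumptions): select the proprioceptive channels most predictive of the force signal, then regress force from them with a small neural network that acts as a virtual force sensor at runtime.

```python
# Illustrative sketch of sensor-based force estimation.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_sensors = 2000, 40
sensors = rng.standard_normal((n_samples, n_sensors))   # joint currents, IMU, ...
force = 2.0 * sensors[:, 3] - 1.5 * sensors[:, 17] + 0.1 * rng.standard_normal(n_samples)

# 1) Keep only the channels that carry information about the measured force.
mi = mutual_info_regression(sensors, force)
selected = np.argsort(mi)[-5:]
print("selected sensor channels:", selected)

# 2) Learn a mapping from the selected channels to the force signal.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(sensors[:1500, selected], force[:1500])

# 3) At runtime, the model acts as a virtual force sensor.
estimate = model.predict(sensors[1500:, selected])
print("MAE:", np.mean(np.abs(estimate - force[1500:])))
```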
Directing Policy Search with Interactively Taught Via-Points
Schroecker, Y; Ben Amor, H., Thomaz, A.
International Conference on Autonomous Agents and Multiagent Systems 2016
Paper
View Abstract
Policy search has been successfully applied to robot motor learning problems. However, for moderately complex tasks the necessity of good heuristics or initialization still arises. One method that has been used to alleviate this problem is to utilize demonstrations obtained by a human teacher as a starting point for policy search in the space of trajectories. In this paper we describe an alternative way of giving demonstrations as soft via-points and show how they can be used for initialization as well as for active corrections during the learning process. With this approach, we restrict the search space to trajectories that will be close to the taught via-points at the taught time and thereby significantly reduce the number of samples necessary to learn a good policy. We show with a simulated robot arm that our method can efficiently learn to insert an object in a hole with just a minimal demonstration and evaluate our method further on a synthetic letter reproduction task.
Experience-based Torque Estimation for an Industrial Robot
Berger, E.; Grehl, S.; Vogt, D.; Jung, B.; Ben Amor, H.
International Conference on Robotics and Automation 2016
Paper
View Abstract
Robotic manipulation tasks often require the control of forces and torques exerted on external objects. This paper presents a machine learning approach for estimating forces when no force sensors are present on the robot platform. In the training phase, the robot executes the desired manipulation tasks under controlled conditions with systematically varied parameter sets. All internal sensor data, in the presented case from more than 100 sensors, as well as the force exerted by the robot are recorded. Using Transfer Entropy, a statistical model is learned that identifies the subset of sensors relevant for torque estimation in the given task. At runtime, the model is used to accurately estimate the torques exerted during manipulations of the demonstrated kind. The feasibility of the approach is shown in a setting where a robotic manipulator operates a torque wrench to fasten a screw nut. Torque estimates with an accuracy of well below ±1Nm are achieved. A strength of the presented model is that no prior knowledge of the robot’s kinematics, mass distribution or sensor instrumentation is required.
Sparse Latent Space Policy Search
Luck, K.S.; Pajarinen, J.; Berger, E.; Kyrki, V.; Ben Amor, H.
AAAI Conference on Artificial Intelligence 2016
Paper | Website | Code
View Abstract
Computational agents often need to learn policies that involve many control variables, e.g., a robot needs to control several joints simultaneously. Learning a policy with a high number of parameters, however, usually requires a large number of training samples. We introduce a reinforcement learning method for sample-efficient policy search that exploits correlations between control variables. Such correlations are particularly frequent in motor skill learning tasks. The introduced method uses Variational Inference to estimate policy parameters, while at the same time uncovering a low-dimensional latent space of controls. Prior knowledge about the task and the structure of the learning agent can be provided by specifying groups of potentially correlated parameters. This information is then used to impose sparsity constraints on the mapping between the high-dimensional space of controls and a lower-dimensional latent space. In experiments with a simulated bi-manual manipulator, the new approach effectively identifies synergies between joints, performs efficient low-dimensional policy search, and outperforms state-of-the-art policy search methods.
2015
Estimation of Perturbations in Robot Behaviors using Dynamic Mode Decomposition
Berger, E.; Müller, D.; Vogt, D.; Jung, B.; Ben Amor, H.
Advanced Robotics, Robotics Society of Japan 2015
Paper | Video
View Abstract
Physical human–robot interaction tasks require robots that can detect and react to external perturbations caused by the human partner. In this contribution, we present a machine learning approach for detecting, estimating, and compensating for such external perturbations using only input from standard sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD), a data processing technique developed in the field of fluid dynamics, which is applied to robotics for the first time. DMD is able to isolate the dynamics of a nonlinear system and is therefore well suited for separating noise from regular oscillations in sensor readings during cyclic robot movements. In a training phase, a DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes. A variant, sparsity promoting DMD, is particularly well suited for high-noise sensors. Results of a user study show that our DMD-based machine learning approach can be used to design physical human–robot interaction techniques that not only result in robust robot behavior but also enjoy a high usability.
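For readers unfamiliar with DMD, the following is a minimal sketch of exact DMD from snapshot pairs (the sensor data and rank are placeholders, and this is only the decomposition step, not the paper's full perturbation-estimation pipeline): a low-rank linear operator is fitted between successive sensor snapshots, and deviations from its predictions indicate external perturbations.

```python
# Minimal sketch of exact Dynamic Mode Decomposition from snapshot pairs.
import numpy as np

def dmd(X, Y, rank=5):
    """X, Y: snapshot matrices with Y[:, k] = state one step after X[:, k]."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    # Low-rank approximation of the linear operator A with Y = A X.
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W        # DMD modes
    return eigvals, modes

# Synthetic cyclic "sensor readings": two oscillations plus noise.
t = np.linspace(0, 10, 500)
data = np.vstack([np.sin(2 * t), np.cos(3 * t), np.sin(2 * t + 0.5)])
data = data + 0.05 * np.random.randn(*data.shape)

eigvals, modes = dmd(data[:, :-1], data[:, 1:], rank=3)
# The fitted model captures the unperturbed oscillation; deviations between its
# predictions and measured snapshots indicate external perturbations.
print("discrete-time eigenvalue phases:", np.angle(eigvals))
```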
Occlusion Aware Object Localization, Segmentation and Pose Estimation
Brahmbhatt, S.; Ben Amor, H., Christensen, H.
British Machine Vision Conference 2015
Paper
View Abstract
We present a learning approach for localization and segmentation of objects in an image in a manner that is robust to partial occlusion. Our algorithm produces a bounding box around the full extent of the object and labels pixels in the interior that belong to the object. Like existing segmentation aware detection approaches, we learn an appearance model of the object and consider regions that do not fit this model as potential occlusions. However, in addition to the established use of pairwise potentials for encouraging local consistency, we use higher order potentials which capture information at the level of image segments. We also propose an efficient loss function that targets both localization and segmentation performance. Our algorithm achieves 13.52% segmentation error and 0.81 area under the false-positive per image vs. recall curve on average over the challenging CMU Kitchen Occlusion Dataset. This is a 42.44% decrease in segmentation error and a 16.13% increase in localization performance compared to the state-of-the-art. Finally, we show that the visibility labelling produced by our algorithm can make full 3D pose estimation from a single image robust to occlusion.
A Taxonomy of Benchmark Tasks for Bimanual Manipulators
Quispe, A. H.; Ben Amor, H.; Christensen, H.
International Symposium on Robotics Research 2015
Exploiting Symmetries and Extrusions for Grasping Household Objects
Quispe, A. H.; Milville, B.; Gutierrez, M.; Erdogan, C.; Stilman, M.; Christensen, H.; Ben Amor, H.
International Conference on Robotics and Automation 2015
Paper
View Abstract
In this paper we present an approach for creating complete shape representations from a single depth image for robot grasping. We introduce algorithms for completing partial point clouds based on the analysis of symmetry and extrusion patterns in observed shapes. Identified patterns are used to generate a complete mesh of the object, which is, in turn, used for grasp planning. The approach allows robots to predict the shape of objects and include invisible regions into the grasp planning step. We show that the identification of shape patterns, such as extrusions, can be used for fast generation and optimization of grasps. Finally, we present experiments performed with our humanoid robot executing pick-up tasks based on single depth images and discuss the applications and shortcomings of our approach.
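The geometric core of symmetry-based completion can be sketched as follows (a simplification: the symmetry plane here is a crude centroid-based guess with a fixed normal, whereas the paper's symmetry and extrusion analysis is far more involved): the observed partial point cloud is mirrored across the estimated plane to hypothesize the unseen side before grasp planning.

```python
# Illustrative sketch of symmetry-based shape completion for a partial point cloud.
import numpy as np

def mirror_across_plane(points, point_on_plane, normal):
    normal = normal / np.linalg.norm(normal)
    d = (points - point_on_plane) @ normal           # signed distance to plane
    return points - 2.0 * d[:, None] * normal        # reflected points

# Synthetic partial view: front half of a small cylinder (y <= 0 only).
theta = np.random.uniform(np.pi, 2 * np.pi, 1000)
z = np.random.uniform(0.0, 0.2, 1000)
partial = np.column_stack([0.05 * np.cos(theta), 0.05 * np.sin(theta), z])

# Crude symmetry-plane estimate: plane through the centroid with an assumed normal.
plane_point = partial.mean(axis=0)
plane_normal = np.array([0.0, 1.0, 0.0])
completed = np.vstack([partial, mirror_across_plane(partial, plane_point, plane_normal)])
print(completed.shape)    # (2000, 3) full shape hypothesis for grasp planning
```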
Learning Multiple Collaborative Tasks with a Mixture of Interaction Primitives
Ewerton, M.; Neumann, G.; Lioutikov, R.; Ben Amor, H.; Peters, J.; Maeda, G.
International Conference on Robotics and Automation 2015
Paper
View Abstract
Robots that interact with humans must learn to not only adapt to different human partners but also to new interactions. Such a form of learning can be achieved by demonstrations and imitation. A recently introduced method to learn interactions from demonstrations is the framework of Interaction Primitives. While this framework is limited to representing and generalizing a single interaction pattern, in practice, interactions between a human and a robot can consist of many different patterns. To overcome this limitation, this paper proposes a Mixture of Interaction Primitives to learn multiple interaction patterns from unlabeled demonstrations. Specifically, the proposed method uses Gaussian Mixture Models of Interaction Primitives to model nonlinear correlations between the movements of the different agents. We validate our algorithm with two experiments involving interactive tasks between a human and a lightweight robotic arm. In the first, we compare our proposed method with conventional Interaction Primitives in a toy problem scenario where the robot and the human are not linearly correlated. In the second, we present a proof-of-concept experiment where the robot assists a human in assembling a box.
2014
Special issue on autonomous grasping and manipulation
Ben Amor, H.; Saxena, A.; Hudson, N.; Peters, J.
Autonomous Robots Journal
Paper
View Abstract
Grasping and manipulation of objects are essential motor skills for robots to interact with their environment and perform meaningful, physical tasks. Since the dawn of robotics, grasping and manipulation have formed a core research field with a large number of dedicated publications. The field has reached an important milestone in recent years as various robots can now reliably perform basic grasps on unknown objects. However, these robots are still far from being capable of human-level manipulation skills including in-hand or bimanual manipulation of objects, interactions with non-rigid objects, and multi-object tasks such as stacking and tool-usage. Progress on such advanced manipulation skills is slowed down by requiring a successful combination of a multitude of different methods and technologies, e.g., robust vision, tactile feedback, grasp stability analysis, modeling of uncertainty, learning, long-term planning, and much more. In order to address these difficult issues, there have been an increasing number of governmental research programs such as the European projects DEXMART, GeRT and GRASP, and the American DARPA Autonomous Robotic Manipulation (ARM) project. This increased interest has become apparent in several international workshops at important robotics conferences, such as the well-attended workshop “Beyond Robot Grasping” at IROS 2012 in Portugal. Hence, this special issue of the Autonomous Robots journal aims at presenting important recent success stories in the development of advanced robot grasping and manipulation abilities. The issue covers a wide range of different papers that are representative of the current state-of-the-art within the field. Papers were solicited with an open call that was circulated in the 4 months preceding the deadline. As a result, we have received 37 submissions to the special issue which were rigorously reviewed by up to four reviewers as well as by at least one of the guest editors. Altogether twelve papers were selected for publication in this special issue. We are in particular happy to include four papers which detail the approach and goal of the DARPA ARM project as well as detailed descriptions of the developed methods.
Interaction Primitives for Human-Robot Cooperation Tasks
Ben Amor, H.; Neumann, G.; Kamthe, S.; Kroemer, O.; Peters, J.
International Conference on Robotics and Automation 2014
Paper
View Abstract
To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities which allow us to infer the behavior of the partner and to participate in the cooperation. We will provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.
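The central inference step can be sketched as plain Gaussian conditioning (a simplification with synthetic placeholder data, not the paper's full DMP-based formulation): a joint Gaussian over human and robot primitive weights is learned from demonstrations, weights are fitted to the observed start of the human motion, and the robot's weights are conditioned on them.

```python
# Illustrative sketch of the conditioning step behind Interaction Primitives.
import numpy as np

rng = np.random.default_rng(1)
n_demos, n_h, n_r = 30, 8, 8                      # basis weights per partner

# Correlated demonstration weights stand in for fitted primitive parameters.
w_h = rng.standard_normal((n_demos, n_h))
w_r = w_h @ rng.standard_normal((n_h, n_r)) + 0.1 * rng.standard_normal((n_demos, n_r))
W = np.hstack([w_h, w_r])

mu = W.mean(axis=0)
Sigma = np.cov(W, rowvar=False)

# Partition into human (observed) and robot (queried) blocks.
mu_h, mu_r = mu[:n_h], mu[n_h:]
S_hh = Sigma[:n_h, :n_h]
S_rh = Sigma[n_h:, :n_h]

# Weights fitted to the observed prefix of the human motion.
w_h_obs = w_h[0] + 0.05 * rng.standard_normal(n_h)

# Gaussian conditioning: predicted robot weights given the observed human weights.
mu_r_given_h = mu_r + S_rh @ np.linalg.solve(S_hh, w_h_obs - mu_h)
print(mu_r_given_h)                               # drives the robot's response
```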
Transfer Entropy for Feature Extraction in Physical Human-Robot Interaction: Detecting Perturbations from Low-Cost Sensors
Berger, E.; Müller, D.; Vogt, D.; Jung, B.; Ben Amor, H.
International Conference on Humanoid Robots 2014
Paper
View Abstract
In physical human-robot interaction, robot behavior must be adjusted to forces applied by the human interaction partner. For measuring such forces, special-purpose sensors may be used, e.g. force-torque sensors, that are however often heavy, expensive and prone to noise. In contrast, we propose a machine learning approach for measuring external perturbations of robot behavior that uses commonly available, low-cost sensors only. During the training phase, behavior-specific statistical models of sensor measurements, so-called perturbation filters, are constructed using Principal Component Analysis, Transfer Entropy and Dynamic Mode Decomposition. During behavior execution, perturbation filters compare measured and predicted sensor values for estimating the amount and direction of forces applied by the human interaction partner. Such perturbation filters can therefore be regarded as virtual force sensors that produce continuous estimates of external forces.
Learning Interaction for Collaborative Tasks with Probabilistic Movement Primitives
Maeda, G.J.; Ewerton, M.; Lioutikov, R.; Ben Amor, H.; Peters, J.; Neumann, G.
International Conference on Humanoid Robots 2014
Paper
View Abstract
This paper proposes a probabilistic framework based on movement primitives for robots that work in collaboration with a human coworker. Since the human coworker can execute a variety of unforeseen tasks, a requirement of our system is that the robot assistant must be able to adapt and learn new skills on demand, without the need of an expert programmer. Thus, this paper leverages the framework of imitation learning and its application to human-robot interaction using the concept of Interaction Primitives (IPs). We introduce the use of Probabilistic Movement Primitives (ProMPs) to devise an interaction method that both recognizes the action of a human and generates the appropriate movement primitive of the robot assistant. We evaluate our method on experiments using a lightweight arm interacting with a human partner and also using motion capture trajectories of two humans assembling a box. The advantages of ProMPs in relation to the original formulation for interaction are discussed and compared.
Online Multi-Camera Registration for Bimanual Workspace Trajectories
Dantam, N.; Ben Amor, H.; Christensen, H.; Stilman, M.
International Conference on Humanoid Robots 2014
Paper
View Abstract
We demonstrate that millimeter-level bimanual manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register multiple cameras online, converging in seconds, by visually tracking features on the robot hands and filtering the result. Then, we compute and track continuous-velocity relative workspace trajectories for the end-effector. We demonstrate the approach using Schunk LWA4 and SDH manipulators and Logitech C920 cameras, showing accurate relative positioning for pen-capping and object hand-off tasks. Our filtering software is available under a permissive license.
Latent Space Policy Search for Robotics
Luck, K.S.; Neumann, G.; Berger, E.; Peters, J.; Ben Amor, H.
International Conference on Intelligent Robots and Systems 2014
Paper
View Abstract
Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
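The following is a deliberately simplified sketch of searching in a latent actuator space (the decoder, reward, and reward-weighted update rule are placeholders, not the paper's algorithm): exploration happens in a few latent dimensions, and a linear mapping expands each latent command to all joints.

```python
# Illustrative sketch of policy search in a low-dimensional latent actuator space.
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_latent = 20, 3
decoder = rng.standard_normal((n_joints, n_latent))   # latent -> joint commands
target = rng.standard_normal(n_joints)                 # stand-in optimal command

def reward(joint_command):
    return -np.sum((joint_command - target) ** 2)

mu = np.zeros(n_latent)                                 # latent policy mean
sigma = 1.0
for iteration in range(100):
    z = mu + sigma * rng.standard_normal((50, n_latent))        # latent exploration
    R = np.array([reward(decoder @ zi) for zi in z])
    weights = np.exp((R - R.max()) / 10.0)                      # reward weighting
    mu = (weights[:, None] * z).sum(axis=0) / weights.sum()
    sigma *= 0.97                                               # shrink exploration

print("final reward:", reward(decoder @ mu))
```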
Online Camera Registration for Robot Manipulation
Dantam, N.; Ben Amor, H.; Christensen, H.; Stilman, M.
International Symposium on Experimental Robotics 2014
Paper
View Abstract
We demonstrate that millimeter-level manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register the camera online, converging in seconds, by visually tracking features on the robot and filtering the result. This online registration handles cases such as perturbed camera positions, wear and tear on camera mounts, and even a camera held by a human. We implement the approach on a Schunk LWA4 manipulator and Logitech C920 camera, servoing to target and pre-grasp configurations. Our filtering software is available under a permissive license.
Dynamic Mode Decomposition for Perturbation Estimation in Human-Robot Interaction
Berger, E.; Sastuba, M.; Vogt, D.; Jung, B.; Ben Amor, H.
International Symposium on Robot and Human Interactive Communication 2014
Paper
View Abstract
In many settings, e.g. physical human-robot interaction, robotic behavior must be made robust against more or less spontaneous application of external forces. Typically, this problem is tackled by means of special purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach suitable for more common, although often noisy sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD) which is able to extract the dynamics of a nonlinear system. It is therefore well suited to separate noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations. We demonstrate the feasibility of our approach with an example where physical forces are exerted on a humanoid robot during walking. In a training phase, a snapshot based DMD model for behavior specific parameter configurations is learned. During task execution the robot must detect and estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes and show that the former outperforms the latter particularly in the presence of sensor noise. We conclude that DMD which has so far been mostly used in other fields of science, particularly fluid mechanics, is also a highly promising method for robotics.
A Data-Driven Method for Real-Time Character Animation in Human-Agent Interaction
Vogt, D.; Grehl, S.; Berger, E.; Ben Amor, H.; Jung, B.
International Conference on Intelligent Virtual Agents 2014
Paper
View Abstract
We address the problem of creating believable animations for virtual humans that need to react to the body movements of a human interaction partner in real-time. Our data-driven approach uses prerecorded motion capture data of two interacting persons and performs motion adaptation during the live human-agent interaction. Extending the interaction mesh approach, our main contribution is a new scheme for efficient identification of motions in the prerecorded animation data that are similar to the live interaction. A global low-dimensional posture space serves to select the most similar interaction example, while local, more detail-rich posture spaces are used to identify poses closely matching the human motion. Using the interaction mesh of the selected motion example, an animation can then be synthesized that takes into account both spatial and temporal similarities between the prerecorded and live interactions.
2013
Probabilistic Movement Modeling for Intention Inference in Human-Robot Interaction
Wang, Z.; Muelling, K.; Deisenroth, M. P.; Ben Amor, H.; Vogt, D.; Schoelkopf, B.; Peters, J.
International Journal of Robotics Research, 32, 7, pp.841-858
Paper
View Abstract
Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes’ theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
Learning Responsive Robot Behavior by Imitation
Ben Amor, H.; Vogt, D.; Ewerton, M.; Berger, E.; Jung, B.; Peters, J.
International Conference on Intelligent Robots and Systems 2013
Paper
View Abstract
In this paper we present a new approach for learning responsive robot behavior by imitation of human interaction partners. Extending previous work on robot imitation learning, that has so far mostly concentrated on learning from demonstrations by a single actor, we simultaneously record the movements of two humans engaged in on-going interaction tasks and learn compact models of the interaction. Extracted interaction models can thereafter be used by a robot to engage in a similar interaction with a human partner. We present two algorithms for deriving interaction models from motion capture data as well as experimental results on a humanoid robot.
Inferring Guidance Information in Cooperative Human-Robot Tasks
Berger, E.; Vogt, D.; Haji-Ghassemi, N.; Jung, B.; Ben Amor, H.
International Conference on Humanoid Robots 2013
Paper
View Abstract
In many cooperative tasks between a human and a robotic assistant, the human guides the robot by exerting forces, either through direct physical interaction or indirectly via a jointly manipulated object. These physical forces perturb the robot’s behavior execution and need to be compensated for in order to successfully complete such tasks. Typically, this problem is tackled by means of special purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach based on sensor data, such as accelerometer and pressure sensor information. In the training phase, a statistical model of behavior execution is learned that combines Gaussian Process Regression with a novel periodic kernel. During behavior execution, predictions from the statistical model are continuously compared with stability parameters derived from current sensor readings. Differences between predicted and measured values exceeding the variance of the statistical model are interpreted as guidance information and used to adapt the robot’s behavior. Several examples of cooperative tasks between a human and a humanoid NAO robot demonstrate the feasibility of our approach.
2012
Mutual Learning and Adaptation in Physical Human-Robot Interaction
Ikemoto, S.; Ben Amor, H.; Minato, T.; Ishiguro, H.; Jung, B.
IEEE Robotics and Automation Magazine 2012
Paper
View Abstract
Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.
XSAMPL3D – An Action Description Language for the Animation of Virtual Characters
Vitzthum, A.; Ben Amor, H.; Heumer, G.; Jung, B.
Journal of Virtual Reality and Broadcasting, 9, 1
Paper
View Abstract
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as an action representation language in an imitation-based approach to character animation: First, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator’s disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
Probabilistic Modeling of Human Movements for Intention Inference
Wang, Z.; Deisenroth, M.; Ben Amor, H.; Vogt, D.; Schoelkopf, B.; Peters, J.
Robotics: Science and Systems
Paper | Video
View Abstract
Inference of human intention may be an essential step towards understanding human actions and is hence important for realizing efficient human-robot interaction. In this paper, we propose the Intention-Driven Dynamics Model (IDDM), a latent variable model for inferring unknown human intentions. We train the model based on observed human movements/actions. We introduce an efficient approximate inference algorithm to infer the human’s intention from an ongoing movement. We verify the feasibility of the IDDM in two scenarios, i.e., target inference in robot table tennis and action recognition for interactive humanoid robots. In both tasks, the IDDM achieves substantial improvements over state-of-the-art regression and classification.
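As a toy illustration, far simpler than the IDDM itself, intention inference from an ongoing movement can be cast as maintaining a posterior over a few discrete candidate targets and updating it with each observed position under per-target movement models. The straight-line movement model and all numbers below are made up for the example.

```python
# Toy intention inference: Bayesian filtering over discrete targets (not the IDDM).
import numpy as np
from scipy.stats import norm

targets = np.array([-1.0, 0.0, 1.0])        # candidate targets (e.g., ball landing points)
posterior = np.ones(3) / 3                  # uniform prior over intentions

# Observed 1-D positions of an ongoing movement heading towards +1.
observations = [0.1, 0.35, 0.6, 0.8]
for t, obs in enumerate(observations, start=1):
    expected = targets * t / len(observations)     # simple straight-line movement model
    likelihood = norm.pdf(obs, loc=expected, scale=0.2)
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print("posterior over intended targets:", np.round(posterior, 3))
```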
Maximally Informative Interaction Learning for Scene Exploration
van Hoof, H.; Kroemer, O.; Ben Amor, H.; Peters, J.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012
Paper
View Abstract
Creating robots that can act autonomously in dynamic, unstructured environments is a major challenge. In such environments, learning to recognize and manipulate novel objects is an important capability. A truly autonomous robot acquires knowledge through interaction with its environment without using heuristics or prior information encoding human domain insights. Static images often provide insufficient information for inferring the relevant properties of the objects in a scene. Hence, a robot needs to explore these objects by interacting with them. However, there may be many exploratory actions possible, and a large portion of these actions may be non-informative. To learn quickly and efficiently, a robot must select actions that are expected to have the most informative outcomes. In the proposed bottom-up approach, the robot achieves this goal by quantifying the expected informativeness of its own actions. We use this approach to segment a scene into its constituent objects as a first step in learning the properties and affordances of objects. Evaluations showed that the proposed information-theoretic approach allows a robot to efficiently infer the composite structure of its environment.
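The action-selection principle can be sketched in a few lines: score each candidate exploratory action by its expected reduction in entropy over the scene hypotheses and pick the maximizer. The two-hypothesis belief and the outcome likelihoods below are invented purely for illustration and are not from the paper.

```python
# Toy expected-information-gain action selection for scene exploration.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Belief over two hypotheses: the observed blob is one object vs. two separate objects.
belief = np.array([0.5, 0.5])

# For each candidate push: p(outcome | hypothesis) over outcomes {moves together, splits}.
likelihoods = {
    "push_center": np.array([[0.9, 0.1],    # one object
                             [0.6, 0.4]]),  # two objects
    "push_edge":   np.array([[0.9, 0.1],
                             [0.2, 0.8]]),
}

def expected_info_gain(belief, lik):
    h_prior = entropy(belief)
    gain = 0.0
    for o in range(lik.shape[1]):
        p_o = float(np.dot(belief, lik[:, o]))      # marginal probability of this outcome
        posterior = belief * lik[:, o] / p_o        # Bayes update given the outcome
        gain += p_o * (h_prior - entropy(posterior))
    return gain

best = max(likelihoods, key=lambda a: expected_info_gain(belief, likelihoods[a]))
print("most informative exploratory action:", best)
```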
Generalization of Human Grasping for Multi-Fingered Robot Hands
Ben Amor, H.; Kroemer, O.; Hillenbrand, U.; Neumann, G.; Peters, J.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012
Paper | Video
View Abstract
Multi-fingered robot grasping is a challenging problem that is difficult to tackle using hand-coded programs. In this paper we present an imitation learning approach for learning and generalizing grasping skills based on human demonstrations. To this end, we split the task of synthesizing a grasping motion into three parts: (1) learning efficient grasp representations from human demonstrations, (2) warping contact points onto new objects, and (3) optimizing and executing the reach-and-grasp movements. We learn low-dimensional latent grasp spaces for different grasp types, which form the basis for a novel extension to dynamic motor primitives. These latent-space dynamic motor primitives are used to synthesize entire reach-and-grasp movements. We evaluated our method on a real humanoid robot. The results of the experiment demonstrate the robustness and versatility of our approach.
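A hedged sketch of the first step named above, learning a low-dimensional latent grasp space from demonstrated hand postures; the contact-point warping and latent-space dynamic motor primitives are not reproduced here. PCA stands in for the latent-space learning, and the 20-DoF postures are synthetic.

```python
# Sketch: learn a low-dimensional latent grasp space from demonstrated hand postures
# with PCA, then decode a latent point back to joint angles. Synthetic data throughout.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Demonstrations: 50 postures of a 20-DoF hand, noisy blends of an open and a closed pose.
open_hand = np.zeros(20)
closed_hand = np.full(20, 1.2)
alphas = rng.uniform(0.0, 1.0, size=(50, 1))
postures = alphas * closed_hand + (1 - alphas) * open_hand \
    + 0.05 * rng.standard_normal((50, 20))

# Learn a 2-D latent grasp space.
pca = PCA(n_components=2).fit(postures)
latent = pca.transform(postures)
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))

# Synthesis: pick (or optimize) a latent point and decode it to a full hand posture.
candidate = latent.mean(axis=0) + np.array([0.5, 0.0])
joint_angles = pca.inverse_transform(candidate.reshape(1, -1))[0]
print("synthesized posture (first 5 joints):", np.round(joint_angles[:5], 3))
```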
Latent Space Policy Search for Robotics
Kroemer, O.; Ben Amor, H.; Ewerton, M.; Peters, J.
International Conference on Humanoid Robots 2012
Paper
2009
Kinesthetic Bootstrapping: Teaching Motor Skills to Humanoid Robots through Physical Interaction
Ben Amor, H.; Berger, E.; Vogt, D.; Jung, B.
KI 2009: 32nd Annual German Conference on Artificial Intelligence
Paper | Video
View Abstract
Programming complex motor skills for humanoid robots can be a time-intensive task, particularly within conventional textual or GUI-driven programming paradigms. Addressing this drawback, we propose a new programming-by-demonstration method called Kinesthetic Bootstrapping for teaching motor skills to humanoid robots by means of intuitive physical interactions. Here, “programming” simply consists of manually moving the robot’s joints so as to demonstrate the skill in mind. The bootstrapping algorithm then generates a low-dimensional model of the demonstrated postures. To find a trajectory through this posture space that corresponds to a robust robot motion, a learning phase takes place in a physics-based virtual environment. The virtual robot’s motion is optimized via a genetic algorithm and the result is transferred back to the physical robot. The method has been successfully applied to the learning of various complex motor skills such as walking and standing up.
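The optimization stage can be illustrated with a toy genetic algorithm that searches for a trajectory through a low-dimensional posture space. The fitness function below is only a placeholder; in the paper, fitness comes from executing the motion on a simulated robot in a physics engine.

```python
# Toy GA over trajectories in a low-dimensional posture space (illustration only).
import numpy as np

rng = np.random.default_rng(2)
TRAJ_LEN, LATENT_DIM, POP, GENERATIONS = 10, 2, 40, 60

def fitness(traj):
    # Placeholder: reward trajectories that end near a target latent posture and are smooth.
    # In Kinesthetic Bootstrapping this score would come from physics-based simulation.
    target = np.array([1.0, -0.5])
    smoothness = np.sum(np.diff(traj, axis=0) ** 2)
    return -np.linalg.norm(traj[-1] - target) - 0.1 * smoothness

population = rng.standard_normal((POP, TRAJ_LEN, LATENT_DIM))
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    elite = population[np.argsort(scores)[-POP // 2:]]           # selection
    children = elite[rng.integers(0, len(elite), POP - len(elite))].copy()
    children += 0.1 * rng.standard_normal(children.shape)        # mutation
    population = np.concatenate([elite, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("best end posture in latent space:", np.round(best[-1], 3))
```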
Physical Interaction Learning: Behavior Adaptation in Cooperative Human-Robot Tasks Involving Physical Contact
Ikemoto, S.; Ben Amor, H.; Minato, T.; Ishiguro, H.; Jung, B.
International Symposium on Robot and Human Interactive Communication
CoTeSys Best Paper Award | Paper
View Abstract
In order for humans and robots to engage in direct physical interaction, several requirements have to be met. Among others, robots need to be able to adapt their behavior in order to facilitate interaction with a human partner. This can be achieved using machine learning techniques. However, most machine learning scenarios to date do not address the question of how learning can be achieved for tightly coupled, physical touch interactions between the learning agent and a human partner. This paper presents an example of such human-in-the-loop learning scenarios and proposes a computationally cheap learning algorithm for this purpose. The efficiency of this method is evaluated in an experiment where human caregivers help an android robot to stand up.
Identifying Motion Capture Tracking Markers with Self-Organizing Maps
Weber, M.; Ben Amor, H.; Alexander, T.
IEEE Virtual Reality
Paper
View Abstract
Motion Capture (MoCap) describes methods and technologies for the detection and measurement of human motion in all its intricacies. Most systems use markers to track points on a body. Deficiencies such as low accuracy of tracked points or even occluded markers are especially common when natural human motion is captured with passive systems (used so as not to hinder the participant). Additionally, such MoCap data is often unlabeled; in consequence, the system does not provide information about which body landmarks the points belong to. Self-organizing neural networks, especially self-organizing maps (SOMs), are capable of dealing with such problems. This work describes a method to model, initialize, and train such SOMs to track and label potentially noisy motion capture data.
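A minimal sketch of the labeling idea, assuming a 1-D chain of SOM nodes and synthetic 2-D marker data; the paper's body-model initialization and occlusion handling are omitted.

```python
# Minimal SOM sketch: train a chain of nodes on unlabeled marker positions, then label
# a new marker by its best-matching node. Synthetic 2-D data; real MoCap is 3-D.
import numpy as np

rng = np.random.default_rng(3)

# Unlabeled "marker" observations scattered around 5 true landmark positions.
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
markers = np.repeat(landmarks, 40, axis=0) + 0.05 * rng.standard_normal((200, 2))
rng.shuffle(markers)

# 1-D SOM with 5 nodes, one per expected landmark.
nodes = rng.uniform(0.0, 4.0, size=(5, 2))
for t, x in enumerate(markers):
    frac = t / len(markers)
    lr = 0.5 * (1.0 - frac)                        # decaying learning rate
    sigma = 2.0 * (1.0 - frac) + 0.1               # shrinking neighborhood width
    winner = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))
    for j in range(len(nodes)):
        h = np.exp(-((j - winner) ** 2) / (2.0 * sigma ** 2))
        nodes[j] += lr * h * (x - nodes[j])

# Label a new (noisy) marker by its best-matching node.
new_marker = np.array([2.05, -0.02])
print("marker assigned to SOM node:",
      int(np.argmin(np.linalg.norm(nodes - new_marker, axis=1))))
```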
2008
Grasp Synthesis from Low-Dimensional Probabilistic Grasp Models
Ben Amor, H.; Heumer, G.; Jung, B.; Vitzthum, A.
Computer Animation and Virtual Worlds, 19
Paper
View Abstract
We propose a novel data-driven animation method for the synthesis of natural looking human grasping. Motion data captured from human grasp actions is used to train a probabilistic model of the human grasp space. This model greatly reduces the high number of degrees of freedom of the human hand to a few dimensions in a continuous grasp space. The low dimensionality of the grasp space in turn allows for efficient optimization when synthesizing grasps for arbitrary objects. The method requires only a short training phase with no need for preprocessing of graphical objects for which grasps are to be synthesized.
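A rough sketch of the pipeline described above under simplifying assumptions: a Gaussian mixture fitted in a PCA-reduced grasp space serves as the probabilistic grasp model, and grasp synthesis samples candidates from it and keeps the best under a placeholder score. The paper's actual model and its object-dependent optimization are not reproduced.

```python
# Sketch of a probabilistic grasp model: PCA reduces demonstrated postures to a grasp
# space, a Gaussian mixture models their density, and synthesis samples and scores
# candidates. Data and the scoring function are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Synthetic demonstrations: 80 postures of a 20-DoF hand around two grasp prototypes.
prototypes = np.vstack([np.full(20, 0.3), np.full(20, 1.0)])
postures = np.vstack([p + 0.1 * rng.standard_normal((40, 20)) for p in prototypes])

pca = PCA(n_components=3).fit(postures)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pca.transform(postures))

def grasp_score(joint_angles):
    # Placeholder for an object-specific grasp quality measure.
    return -np.abs(joint_angles.mean() - 0.9)

samples, _ = gmm.sample(50)                      # candidate grasps in the latent space
candidates = pca.inverse_transform(samples)      # back to full joint-angle vectors
best = candidates[np.argmax([grasp_score(c) for c in candidates])]
print("selected grasp, mean joint angle:", round(float(best.mean()), 3))
```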
Grasp Recognition for Uncalibrated Data Gloves – A Machine Learning Approach
Heumer, G.; Ben Amor, H.; Jung, B.
Presence: Teleoperators and Virtual Environments, 17, MIT Press
Paper
View Abstract
This paper presents a comparison of various machine learning methods applied to the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often not perfect. In contrast, the present study aims at evaluating recognition methods that do not require prior calibration of the data glove. Instead, raw sensor readings are used as input features that are directly mapped to different categories of hand shapes. An experiment was carried out in which test persons wearing a data glove had to grasp physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was comprehensively analyzed using numerous classification techniques provided in an open-source machine learning toolbox. Evaluated machine learning methods are composed of (a) 38 classifiers including different types of function learners, decision trees, rule-based learners, Bayes nets, and lazy learners; (b) data preprocessing using principal component analysis (PCA) with varying degrees of dimensionality reduction; and (c) five meta-learning algorithms under various configurations where selection of suitable base classifier combinations was informed by the results of the foregoing classifier evaluation. Classification performance was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) We show that a reasonably good to highly reliable recognition of grasp types can be achieved – depending on whether or not the glove user is among those training the classifier – even with uncalibrated data gloves. (2) We identify the best performing classification methods for the recognition of various grasp types. To conclude, cumbersome calibration processes before productive usage of data gloves can be spared in many situations.
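The evaluation setup lends itself to a short sketch: raw (uncalibrated) sensor vectors are mapped directly to grasp-type labels and several off-the-shelf classifiers are compared by cross-validation. The synthetic data and the three scikit-learn classifiers below merely stand in for the paper's 38 classifiers and real glove recordings.

```python
# Sketch: map raw glove sensor readings directly to grasp-type labels and compare
# a few classifiers by cross-validation. Data is synthetic, for illustration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# 300 samples of 22 raw bend-sensor values, 5 grasp types (e.g., Schlesinger classes).
n_per_class, n_sensors, n_classes = 60, 22, 5
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_sensors))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

for clf in (DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{clf.__class__.__name__}: {acc:.2f} accuracy")
```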
2007
Grasp Recognition with Uncalibrated Data Gloves – A Comparison of Classification Methods
Heumer, G.; Ben Amor, H.; Weber, M.; Jung, B.
IEEE Virtual Reality, IEEE
Paper
View Abstract
This paper presents a comparison of various classification methods for the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often not perfect. In contrast, the present study aims at evaluating recognition methods that do not require prior calibration of the data glove, by using raw sensor readings as input features and mapping them directly to different categories of hand shapes. An experiment was carried out in which test persons wearing a data glove had to grasp physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was analyzed with 28 classifiers including different types of neural networks, decision trees, Bayes nets, and lazy learners. Each classifier was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) We show that a reasonably good to highly reliable recognition of grasp types can be achieved – depending on whether or not the glove user is among those training the classifier – even with uncalibrated data gloves. (2) We identify the best performing classification methods for recognition of various grasp types. To conclude, cumbersome calibration processes before productive usage of data gloves can be spared in many situations.
A Neural Framework for Robot Motor Learning based on Memory Consolidation
Ben Amor, H.; Ikemoto, S.; Minato, T.; Jung, B.; Ishiguro, H.
International Conference on Adaptive and Natural Computing Algorithms
Paper
View Abstract
Fixed-size neural networks are a popular technique for learning the adaptive control of non-linear plants. When applied to the complex control of android robots, however, they suffer from serious limitations, such as the moving-target problem, i.e., the interference between old and newly learned knowledge. To overcome these problems, we propose the use of growing neural networks in a new learning framework based on the process of consolidation. The new framework is able to overcome the drawbacks of sigmoidal neural networks while maintaining their power of generalization. In experiments, the framework was successfully applied to the control of an android robot.
2006
An Animation System for Imitation of Object Grasping in Virtual Reality
Weber, M.; Heumer, G.; Ben Amor, H.; Jung, B.
Advances in Artificial Reality and Tele-Existence
Paper
View Abstract
Interactive virtual characters are nowadays commonplace in games, animations, and Virtual Reality (VR) applications. However, relatively little work has so far considered the animation of interactive object manipulations performed by virtual humans. In this paper, we first present a hierarchical control architecture incorporating plans, behaviors, and motor programs that enables virtual humans to accurately manipulate scene objects using different grasp types. Furthermore, as a second main contribution, we introduce a method by which virtual humans learn to imitate object manipulations performed by human VR users. To this end, movements of the VR user are analyzed and processed into abstract actions. A new data structure called grasp events is used for storing information about user interactions with scene objects. High-level plans are generated from grasp events to drive the virtual humans’ animation. Due to their high-level representation, recorded manipulations often naturally adapt to new situations without losing plausibility.
From Motion Capture to Action Capture: A Review of Imitation Learning Techniques and their Application to VR-based Character Animation
Jung, B.; Ben Amor, H.; Heumer, G.; Weber, M.
Thirteenth ACM Symposium on Virtual Reality Software and Technology, ACM Press
Paper | Video
View Abstract
We present a novel method for virtual character animation that we call action capture. In this approach, virtual characters learn to imitate the actions of Virtual Reality (VR) users by tracking not only the users’ movements but also their interactions with scene objects. Action capture builds on conventional motion capture but differs from it in that higher-level action representations are transferred rather than low-level motion data. As an advantage, the learned actions can often be naturally applied to varying situations, thus avoiding the retargeting problems of motion capture. The idea of action capture is inspired by human imitation learning; related methods have long been investigated in robotics. The paper reviews the relevant literature in these areas before framing the concept of action capture in the context of VR-based character animation. We also present an example in which the actions of a VR user are transferred to a virtual worker.
Learning Android Control using Growing Neural Networks
Ben Amor, H.; Ikemoto, S.; Minato, T.; Ishiguro, H.
Proceedings of JSME Robotics and Mechatronics Conference ROBOMEC
View Abstract
Fixed-size neural networks are a popular technique for learning the adaptive control of non-linear plants. When applied to the complex control of android robots, however, they suffer from serious limitations, such as the moving-target problem, i.e., the interference between old and newly learned knowledge. To overcome these problems, we propose the use of growing neural networks in a new learning framework based on the process of consolidation. The new framework is able to overcome the drawbacks of sigmoidal neural networks while maintaining their power of generalization. In experiments, the framework was successfully applied to the control of an android robot.
2005
Intelligent Exploration for Genetic Algorithms: Using Self-Organizing Maps in Evolutionary Computation
Ben Amor, H.; Rettinger, A.
Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pp. 1531–1538, ACM Press
Paper
View Abstract
The trade-off between exploration and exploitation is a well-known issue in Evolutionary Algorithms: an unbalanced search can lead to premature convergence. GASOM, a novel Genetic Algorithm, addresses this problem through intelligent exploration techniques. The approach uses Self-Organizing Maps to mine data from the evolution process. The information obtained is used to enhance the search strategy and counteract genetic drift. This way, local optima are avoided and exploratory power is maintained. The evaluation of GASOM on well-known problems shows that it effectively prevents premature convergence and seeks the global optimum; particularly on deceptive and misleading functions it showed outstanding performance. Additionally, representing the search history by the Self-Organizing Map provides a visually pleasing insight into the state and course of evolution.
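A toy rendering, not the GASOM implementation, of the exploration mechanism: a self-organizing map summarizes the genotypes visited so far, and individuals that land on rarely visited map units receive a novelty bonus so the population keeps exploring. The objective function, bonus weighting, and SOM update rule below are all assumptions for illustration.

```python
# Toy GA with a SOM-based novelty bonus that discourages premature convergence.
import numpy as np

rng = np.random.default_rng(5)
GENOME, POP, NODES = 8, 30, 16

def raw_fitness(x):
    # Placeholder objective: all-ones is optimal, all-zeros is a mildly deceptive trap.
    return np.sum(x) + 3.0 * np.prod(1 - x)

nodes = rng.random((NODES, GENOME))        # SOM codebook over genotype space
visits = np.ones(NODES)                    # visit counts per map unit

def novelty_bonus(x):
    bmu = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))
    visits[bmu] += 1
    nodes[bmu] += 0.2 * (x - nodes[bmu])   # online update of the winning unit
    return 1.0 / visits[bmu]               # rarely visited region -> larger bonus

population = rng.integers(0, 2, size=(POP, GENOME))
for _ in range(50):
    scores = np.array([raw_fitness(x) + 2.0 * novelty_bonus(x) for x in population])
    parents = population[np.argsort(scores)[-POP // 2:]]
    children = parents.copy()
    flip = rng.random(children.shape) < 0.05        # bit-flip mutation
    children[flip] = 1 - children[flip]
    population = np.vstack([parents, children])

best = population[np.argmax([raw_fitness(x) for x in population])]
print("best genome:", best, "fitness:", raw_fitness(best))
```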