So, Where’s My Robot?

Thoughts on Social Machine Learning

Interactive Robot Learning @ RSS 2008

Last weekend I ran an RSS workshop with Henrik Jacobsson, Danijel Skocaj, and Geert-Jan Kruijff. Today I was writing up a briefing for the euCognition project and thought I’d also post it here. Soon, you’ll find all of the papers linked on the workshop website: http://www.dfki.de/cosy/www/events/InteractiveRobotLearning2008/

Interactive Robot Learning, RSS 2008

Many future applications for autonomous robots bring them into human environments as helpful assistants to untrained users in homes, offices, hospitals, and more. These applications will require robots to be able to quickly learn how to perform new tasks and skills from natural human instruction. The key here is to make it possible for the human to interact with the robot without having to read a manual.

The workshop on Interactive Robot Learning (IRL) was held at the Robotics: Science and Systems 2008 conference. The discussion spanned the breadth of research questions at the intersection of Machine Learning and Human-Robot Interaction.

The workshop began with a keynote from Jeff Orkin of MIT, who brings experience from the video game industry. Orkin gave an overview of a project in which he and his colleagues are collecting data from thousands of people playing The Restaurant Game. In the game, people act out an everyday restaurant scene as either a waiter or a customer, thereby “teaching” the computer about the social behavior and dialog that are common in this situation. One important lesson from Orkin’s project for interactive robot learning is that people will not always give perfect input or examples to your learning system. It is therefore important to collect enough data, and to use algorithms that let the anomalies wash out of the model.

In addition to the invited keynote speakers, we had seven papers that were submitted, reviewed, and selected for presentation at the workshop. Three of these were presented in the morning session. The first speaker was Sylvain Calinon from EPFL, who spoke about a programming-by-demonstration framework that incorporates natural teaching mechanisms, such as attending to the teacher’s pointing and gaze direction and allowing the teacher to physically move the robot during training. Maren Bennewitz then presented a paper on recognizing gestures, such as head nods and hand gestures; many of these gestures are quite generic and would have broad use in communicating with a robot learning system. Olaf Booij presented work on interactive mapping, in which a robot learns a semantic map of places in a home by clustering sensor data, using interaction with a human partner to simplify the task: in ambiguous situations the robot engages its partner in a simple dialog, asking the name of the current location.
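
To make that interactive-mapping idea concrete, here is a minimal sketch (my own toy example, not Booij’s actual system): sensor observations are clustered into places, and the robot falls back to a naming dialog whenever a new observation is ambiguous between clusters. The 2-D features, the distance threshold, and the identify_place helper are all hypothetical.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy "sensor" observations taken in two different rooms (hypothetical 2-D features).
    rng = np.random.default_rng(1)
    observations = np.vstack([
        rng.normal([0.0, 0.0], 0.3, (50, 2)),
        rng.normal([5.0, 5.0], 0.3, (50, 2)),
    ])

    # Cluster the observations into candidate places.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
    place_names = {}  # cluster index -> name supplied by the human partner

    def identify_place(obs):
        # Distance from the new observation to each place cluster.
        dists = np.linalg.norm(kmeans.cluster_centers_ - obs, axis=1)
        nearest = np.argsort(dists)
        ambiguous = dists[nearest[1]] - dists[nearest[0]] < 0.5
        # When the place is ambiguous or unnamed, ask the human for its name.
        if ambiguous or int(nearest[0]) not in place_names:
            place_names[int(nearest[0])] = input("What is this place called? ")
        return place_names[int(nearest[0])]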

The afternoon session began with a keynote speaker, Jan Peters from the Max Planck Institute for Biological Cybernetics. Peters works on robotics, nonlinear control, and machine learning. In his talk he presented a framework for motor skill learning on robots: parameterized motor primitives are used for movement generation, and higher-level tasks then involve transforming these movements into motor commands. To achieve this, Peters introduced an EM-based reinforcement learning algorithm that learns smoothly, without dangerous jumps in the solution space. Learning smoothly in the solution space is also likely to be the most understandable to a human teacher.
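
As a rough illustration of that style of learning (a minimal sketch in the spirit of the talk, not Peters’ actual algorithm), the snippet below perturbs the weights of a hypothetical radial-basis-function motor primitive and updates them with a reward-weighted average, so the parameters drift smoothly toward better solutions rather than jumping around the solution space.

    import numpy as np

    # Hypothetical motor primitive: a weighted sum of radial basis functions over time.
    def rollout(weights, centers, widths, t):
        basis = np.exp(-((t[:, None] - centers) ** 2) / (2 * widths ** 2))
        return basis @ weights  # trajectory generated by the primitive

    # Hypothetical task reward: end the trajectory close to a target position.
    def reward(trajectory, target=1.0):
        return np.exp(-(trajectory[-1] - target) ** 2)

    t = np.linspace(0.0, 1.0, 50)
    centers = np.linspace(0.0, 1.0, 5)
    widths = np.full(5, 0.1)
    weights = np.zeros(5)

    rng = np.random.default_rng(0)
    for _ in range(100):
        # Explore with small parameter perturbations, then average them weighted by reward.
        eps = rng.normal(0.0, 0.1, (20, 5))
        rewards = np.array([reward(rollout(weights + e, centers, widths, t)) for e in eps])
        weights += (rewards[:, None] * eps).sum(axis=0) / (rewards.sum() + 1e-10)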

In the afternoon we had three more paper presentations. Two papers were in the realm of assistive robotics. Adriana Tapus presented a robot therapy system that learns and adapts online to personalize therapy and maximize health outcomes. Their approach is a novel incremental learning method, and positive results were found with both stroke-rehabilitation patients and dementia patients. Ayanna Howard also presented a therapy application for interactive learning. The goal is for the robot to observe and evaluate a therapy patient’s exercises, assisting the work of the therapist; they presented two methods for learning to recognize therapy exercises from visual input. Jure Zabkar presented the final paper of the session, on using qualitative representations in robot learning. Zabkar argued for this approach because it results in models that are intuitive for non-expert humans to inspect and understand, which is a key component of interactive learning.

The workshop ended with a final keynote speaker, Aude Billard from EPFL. Billard has made several contributions over the years in the realm of robot programming by demonstration, and her talk covered many aspects of her lab’s work on imitation learning for robots, in particular their focus on the complementary problems of “what to imitate” and “how to imitate.” The first is about determining the key components or features that really represent the goal of a task; having a framework for determining “what to imitate” gives the system a means to generalize appropriately. Their approach is based on Gaussian Mixture Models. The “how to imitate” problem involves translating motions observed from a human into motions the robot itself can perform, while still achieving the goal of the task. Their approach is a stable dynamical system, acting in a hybrid Cartesian/joint-angle frame of reference, which ends up being able to handle perturbations in the environment, respect joint-angle limits, and adapt to changes in target position.
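
As a small illustration of the “what to imitate” idea (a toy sketch of my own, using synthetic reaching demonstrations rather than Billard’s actual data or formulation), one can fit a Gaussian Mixture Model over several demonstrations and inspect the component covariances: where the demonstrations agree tightly, the covariance is small, marking the parts of the task the robot should reproduce faithfully.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic demonstrations of a reaching motion: (time, x, y) samples whose
    # variability shrinks as the hand approaches the goal.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)
    demos = []
    for _ in range(5):
        x = t + rng.normal(0.0, 0.05 * (1.0 - t), 50)
        y = 0.5 * t + rng.normal(0.0, 0.05 * (1.0 - t), 50)
        demos.append(np.column_stack([t, x, y]))
    data = np.vstack(demos)

    # Fit a GMM over (time, position); tight positional covariance in a component
    # indicates a part of the task that is important to imitate.
    gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(data)
    for k, cov in enumerate(gmm.covariances_):
        print(f"component {k}: positional variance {np.trace(cov[1:, 1:]):.4f}")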

July 3rd, 2008 | Conferences, HRI, Machine Learning
