So, Where’s My Robot?

Thoughts on Social Machine Learning

RO-MAN 2006

Earlier this month I attended the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006), held just outside of London in Hatfield, UK, home of the University of Hertfordshire (and the oldest pub in England!).

The conference kicked off with a keynote talk by Shuji Hashimoto of Waseda University in Japan. Hashimoto-san's talk captured the big difference between this conference and most robotics conferences: the focus on the human element, and the importance of designing robots that are social and fit seamlessly into our human world. He posited that the robot industry has not yet launched, and will not launch until we usher in a new era of "partner" robots, robots with what he calls "Kansei": roughly translated, "machines with heart." It was a provocative enough topic to get a little press coverage.

There were many topics and papers at the conference relevant to this blog, but for now I’ll highlight some papers presented about learning and teamwork.


Y. Demiris, B. Khadhouri, “Content-Based Control of Goal-Directed Attention During Human Action Perception”
Yiannis Demiris has a history of work in imitation learning, specifically on learning forward models. This paper was about using forward models for goal-directed attention in an imitation task: the imitator uses its forward models to direct attention to the object or part of the environment it would be controlling if it were performing the action the demonstrator is doing.
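A minimal sketch of the idea (the model names and toy one-dimensional dynamics are my own inventions for illustration, not the paper's implementation): each forward model predicts where the demonstrator's hand will go if it is controlling a particular object, and attention is directed to the object whose model best explains the observed motion.

```python
# Sketch: a bank of toy forward models competes to explain the
# demonstrator's hand motion; attention goes to the object that the
# winning model controls. All names and dynamics are hypothetical.

def predict_reach(state):
    # toy forward model: hand moves halfway toward the cup
    return state["hand"] + 0.5 * (state["cup"] - state["hand"])

def predict_push(state):
    # toy forward model: hand moves halfway toward the box
    return state["hand"] + 0.5 * (state["box"] - state["hand"])

FORWARD_MODELS = {"cup": predict_reach, "box": predict_push}

def attend(state, observed_next_hand):
    """Return the object whose forward model best explains the motion."""
    errors = {
        obj: abs(model(state) - observed_next_hand)
        for obj, model in FORWARD_MODELS.items()
    }
    return min(errors, key=errors.get)

state = {"hand": 0.0, "cup": 2.0, "box": -3.0}
print(attend(state, observed_next_hand=0.9))  # motion toward the cup -> "cup"
```

The prediction error doubles as an attention signal: the better a model explains the demonstrator, the more the corresponding object is worth looking at.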

J. Saunders et al., "Using Self-Imitation to Direct Learning"
This talk was interesting because it had some fun examples of social scaffolding in chimps. Mother chimps scaffold a child's learning of nut cracking by putting the tools and nuts in favorable positions. Another example was the chimps Washoe and Loulis: Washoe taught Loulis American Sign Language by demonstration and by putting Loulis' hands in the correct positions. The point of these examples is the importance of a teacher being able to physically manipulate the learner as part of the learning process. This type of embodied demonstration is common in nature and often overlooked in robot learning.

S. Calinon and A. Billard, "Teaching a Humanoid Robot to Recognize and Reproduce Social Cues"
This paper is mostly about a gesture recognition/learning system. But the part I find most interesting is that the goal of these social cues is to let the robot participate in an imitation game where social cues actually frame the learning process. This is in contrast to much of the work in Learning by Demonstration, which assumes nicely segmented examples from the human teacher.

A. Thomaz, G. Hoffman, and C. Breazeal, "Reinforcement Learning with Human Teachers"
Our paper explored how everyday people approach the task of teaching a reinforcement learning agent in a computer game. We found that they give more positive than negative feedback, they try to guide the agent as well as give feedback, and they adjust their behavior as they develop a model of the learner.
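A toy sketch of the basic setup (the function names and the simple additive blending of rewards are my assumptions for illustration, not the algorithm from our paper): a tabular Q-learner that folds the human teacher's scalar feedback into the reward signal it updates on.

```python
# Sketch: Q-learning where a human's feedback is treated as extra
# reward on top of the environment's reward. Names and the blending
# scheme are hypothetical simplifications.

def update_q(q, state, action, env_reward, human_feedback,
             next_best=0.0, alpha=0.5, gamma=0.9):
    """One Q-learning step with human feedback added to the reward."""
    reward = env_reward + human_feedback  # human signal as extra reward
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * next_best - old)
    return q

q = {}
q = update_q(q, "s0", "left", env_reward=0.0, human_feedback=1.0)    # teacher approves
q = update_q(q, "s0", "right", env_reward=0.0, human_feedback=-1.0)  # teacher disapproves
print(q[("s0", "left")] > q[("s0", "right")])  # prints True
```

Even this crude blend shows why teacher behavior matters: if people give mostly positive feedback and use it to guide rather than strictly evaluate, the learner's value estimates will reflect the teacher's intentions, not just task success.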


There were two or three sessions on human-robot collaboration and teamwork. One interesting thing to note is that most were Wizard of Oz studies (i.e., a human remote-controls the robot during the task), presumably because there aren't any robots out there capable enough to participate in an interesting teamwork task with a human. So it's worth thinking about what we're learning from Wizard of Oz studies. The goal is for them to give us design principles for building autonomous robots. But to play devil's advocate: by giving a robot a "human brain" and having it collaborate with a person, does this just reduce to a study of handicapped human-human collaboration? Anyhow, here are a couple of nice papers from the H-R collaboration sessions.

Kim and Hinds, "Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human Robot Interaction"
In human-robot teamwork, who gets credit and who gets blame? In this paper they report a series of studies looking at how different levels of autonomy and different levels of transparency (the robot explaining its behavior) affected people's judgments. Increased autonomy of a robot increases blame more than credit, but increased transparency decreases the blame directed toward the robot.

Oestreicher and Eklundh, “User Expectations on Human-Robot Cooperation”
Their study looks at public opinion about human-robot teamwork. After interviewing people about what kinds of assistive robots they would want, the authors find that people want a partner, not a tool, but they are wary of giving up control. My favorite example was a woman who loved gardening but was physically unable to do it anymore. She didn't want a robot to do her gardening for her; she wanted the robot to enable her to once again participate in this activity she loved.

September 19th, 2006 | Uncategorized
