So, Where’s My Robot?

Thoughts on Social Machine Learning

Junior Learns about Objects

We recently presented a paper at HRI 2009, “Learning about Objects from Human Teachers”.  One of my PhD students, Maya Cakmak, did a nice study looking at how everyday people (well, your average college student anyway) try to teach our little upper-torso robot, Junior, what he can do with a simple set of objects.  Objects in the set have various “affordances”: for example, some things roll, others can be picked up, and a little box can be opened.

Our approach, Socially Guided Machine Learning, looks at ways that a human partner can intuitively help the robot learn.  The paper details three main points:

1) We conducted experiments with Junior, and made six observations characterizing how people approached teaching about objects.  For example, people start with simple objects and move to more complex ones.  They structure the session in chunks, focusing on one affordance at a time.  And they lead the robot to significantly more positive experiences (where something happens) compared to non-social self-exploration.  This is particularly useful when an object’s affordance is a rare event (e.g., opening the lid of a box).

2) We showed that Junior successfully used transparency to mitigate errors.  Junior used eye gaze to communicate his need for help, and people consistently interpreted this gesture and helped the robot in an appropriate way.  This in turn significantly sped up the interaction.

3) Finally, we presented the impact of “social” versus “non-social” data sets when training SVM classifiers.  In particular, we saw the impact of people’s propensity to focus on rare events: classifiers trained on socially collected datasets are much better at predicting rare effects than classifiers trained on non-social datasets.
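To see why the social data sets win on rare events, here is a toy sketch (not the paper’s actual setup — the paper trained SVMs on real robot interaction data; the 1-D “grasp position” feature, the `lid_opens` sweet spot, and the nearest-centroid stand-in classifier below are all hypothetical):

```python
def lid_opens(x):
    # hypothetical ground truth: the box lid opens only for a narrow
    # range of grasp positions -- a rare event under random exploration
    return 9.2 <= x <= 9.8

def make_classifier(samples):
    # toy nearest-centroid classifier standing in for the paper's SVMs
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    if not pos:                      # never saw the rare event...
        return lambda x: False       # ...so it can never be predicted
    cp = sum(pos) / len(pos)
    cn = sum(neg) / len(neg)
    return lambda x: abs(x - cp) < abs(x - cn)

# "non-social" self-exploration: evenly spaced pokes miss the sweet spot
non_social = [(float(x), lid_opens(float(x))) for x in range(10)]  # x = 0..9

# "social" teaching: the teacher spends a chunk of time on the rare event
social = ([(float(x), False) for x in range(5)]            # simple negatives
          + [(9.2 + 0.15 * i, True) for i in range(5)])    # lid-opening demos

rare_cases = [9.2 + 0.1 * i for i in range(7)]             # held-out positives

def recall(clf):
    # fraction of true lid-opening cases the classifier predicts correctly
    return sum(clf(x) for x in rare_cases) / len(rare_cases)

print(recall(make_classifier(non_social)))  # 0.0 -- rare event never observed
print(recall(make_classifier(social)))      # 1.0 -- teacher made it learnable
```

The mechanism is blunt but real: if self-exploration never triggers the rare affordance, the training set contains no positive examples at all, and no classifier can recover it; a teacher who deliberately dwells on the rare event fixes the data, not the algorithm.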

June 21st, 2009 Posted by | Conferences, HRI | no comments
