So, Where’s My Robot?

Thoughts on Social Machine Learning


I recently taught a two-week introduction to HRI for our new crop of Robotics PhD students. It was an interesting exercise to boil down some of the main topics and issues of HRI into six lectures/meetings. My goal was to communicate how some traditional robotics problems become conceptually different when you add a human-in-the-loop, and that there are some fundamental human abilities that make up social intelligence.

One amazing human capability is our propensity to recognize goal-directed motion. Whereas a computer vision algorithm struggles to parse a stream of video and determine whether there are people in it and what they might be doing, humans do this naturally from a very young age.

I think one of the most interesting findings in this realm for a roboticist is the work with infants that uncovers some of the principles of recognizing goal-directed action. Csibra performed a series of experiments with infants and children looking at how they interpret simplified intentional action represented by geometric shapes, something like this famous Heider-Simmel video.

People watch this video and see a complex series of social dynamics and goal-directed actions. Most state-of-the-art computer vision approaches that robots use would see it as just a bunch of pixels moving around. What this says to me is that perhaps we need a completely new approach to activity recognition. In a standard pipeline you first find a human, then track them and identify their moving body parts, and finally compare how those parts move against pre-existing models of various human activities. But if I can so easily attribute human intentional action to squares and triangles, there must be a much simpler feature space in which robots could reason about intentional action.
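To make the idea concrete, here is a toy sketch of what reasoning in that simpler feature space might look like. It works directly on a 2D trajectory of a shape's center, with no person detection or body-part tracking at all. The specific feature (path efficiency: how direct the path is relative to its length) and the threshold are my own illustrative choices, not features anyone has validated with infants:

```python
import math

def path_efficiency(traj):
    """Ratio of straight-line displacement to total path length.

    Values near 1.0 suggest direct, efficient movement toward an
    endpoint -- one plausible low-level cue for goal-directedness.
    traj is a list of (x, y) positions over time.
    """
    path_len = sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))
    if path_len == 0:
        return 0.0
    return math.dist(traj[0], traj[-1]) / path_len

def looks_goal_directed(traj, threshold=0.8):
    # Hypothetical rule: sufficiently efficient paths get flagged
    # as goal-directed, wandering ones do not.
    return path_efficiency(traj) >= threshold

# A shape heading straight toward a "goal" vs. one wandering.
direct = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
wander = [(0, 0), (1, 1), (0, 2), (1, 3), (0, 4)]

print(looks_goal_directed(direct))  # True
print(looks_goal_directed(wander))  # False
```

The point is not that this particular feature is right, but that the input is a handful of numbers per frame rather than a pixel array, which is exactly the kind of representation the squares-and-triangles demos suggest might suffice.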

I’m excited about the work of Dare Baldwin at the University of Oregon, who is working to uncover what low-level spatio-temporal features infants might be attending to when they correctly interpret intentional action. At the very least this can provide inspiration, if not a detailed roadmap, for building some intentional action recognition into our social robots.

December 10th, 2009 Posted by | HRI |