So, Where’s My Robot?

Thoughts on Social Machine Learning

Hello from Atlanta

Class is in session… well, I'm not teaching until next semester, but I have officially started my job at Georgia Tech. I think that makes me a Ramblin' Wreck, but I'm still figuring it all out.

You can visit my new digital home, which has my new contact info and not much more… hopefully it will get an update soon!

August 20th, 2007 Posted by | Announcements | no comments


Empathy: The ability to understand and share the feelings of another. It’s an important concept for social robots because it is the foundation of any human interaction. I can’t see what’s going on in your brain, so in order to interact with you, I have to infer your mental states based on my own perceptions and experiences.

This account of Theory of Mind is known as "Simulation Theory," and its proposed neural basis is the mirror neuron. Until recently these neurons had only been directly identified in lab monkeys. But this week an article in the journal Science reports on Dr. Iacoboni's work at UCLA, published in a PLoS ONE article.

“Preliminary data from unpublished experiments this spring suggest that researchers at UCLA, probing the exposed brain tissue of patients undergoing neurosurgery, for the first time have isolated individual human brain cells that act as mirror neurons…”

Mirror neurons can be thought of as “subconscious seeds of social behavior…Located in the brain’s motor cortex, which orchestrates movement and muscle control, the cells fire when we perform an action and also when we watch someone else do the same thing. When someone smiles or wrinkles her nose in distaste, motor cells in your own brain associated with those expressions resonate in response like a tuning fork, triggering a hint of the feeling itself.”

Robotics researchers have recently drawn inspiration from simulation theory in thinking about action understanding and imitation. From an engineering standpoint, the idea of repurposing generative mechanisms for recognition is elegant. Some of the open questions I find interesting: how to map perception onto action; how to run action generation and inference at the same time, or at least multi-tasked in a way that keeps the signals from getting crossed; among others.
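To make the "repurposing generative mechanisms for recognition" idea concrete, here is a toy sketch of the simulation-theory recipe: each candidate action has a forward model that predicts what we would observe if we performed it, and recognition picks the model whose predictions best match an observed trajectory. The actions, models, and trajectories here are all hypothetical stand-ins, not any particular lab's architecture.

```python
# Toy sketch: recognition by simulation. Each candidate action has a
# forward model (here, just a constant per-step displacement) that a
# real system would learn from its own motor experience.

FORWARD_MODELS = {
    "reach": 1.0,      # state drifts up each step
    "withdraw": -1.0,  # state drifts down each step
    "hold": 0.0,       # state stays put
}

def predict(action, state):
    """Forward model: predicted next state if *we* performed `action`."""
    return state + FORWARD_MODELS[action]

def recognize(trajectory):
    """Run every forward model over the observed trajectory and
    return the action whose predictions fit best (lowest error)."""
    errors = {}
    for action in FORWARD_MODELS:
        err = 0.0
        for state, next_state in zip(trajectory, trajectory[1:]):
            err += (predict(action, state) - next_state) ** 2
        errors[action] = err
    return min(errors, key=errors.get)

observed = [0.0, 1.1, 2.0, 2.9]   # someone else's state, drifting upward
print(recognize(observed))        # -> reach
```

The same `predict` function could drive the robot's own action generation, which is exactly the signal-crossing question above: generation and inference want to share one mechanism without interfering.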

August 20th, 2007 Posted by | In the News, Situated Learning | no comments

So, Where’s My Alien?

NYTimes Magazine

On Sunday, July 29, the NYTimes Magazine ran a story, "The Real Transformers," mostly about robotics research going on at MIT.

The writer, Robin Marantz Henig, came to MIT for 2-3 days and interviewed several people and robots in Cynthia's group at the Media Lab and in Rod Brooks's group at the AI Lab.

The tone of the article is very negative. Robin is not a technologist, and her article really reads like a trip report, "My visit with the robots at MIT"… and in a nutshell, she was disappointed that we haven't been able to build R2D2 or C3PO yet. Sorry Robin, still working on it!

But fear not, all you hard-working roboticists… Rod Brooks did get the last word! The Monday after the article ran, NPR did a follow-up interview with Robin and Rod. In response to her disappointment, he said he has had this problem for years: people watch sci-fi movies and wonder why we don't have robots like that… but they never go around wondering where the aliens are.

On a more serious note, I was glad that Rod had the chance to correct one of the important misconceptions about social robots. Robin came away with the idea that we design human-like interaction abilities into our robots in order to "trick" people. But this is just not true. The theoretical standpoint of humanoid social robotics is that (1) using human-like social interaction techniques will be more natural for people, so they won't have to learn how to interact with the machine; and (2) our world was built for humans, so from an engineering point of view a human-like size and shape is the most efficient for getting things done in it. A good example here is eye gaze behavior. The way humans orient their gaze and saccade around a scene is genuinely a good way to do perception. It also happens to make the robot look very life-like, but that shouldn't be the primary goal.
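The gaze point above can be sketched in a few lines. A common pattern (and it is only a generic pattern, not any specific robot's controller) is saliency-driven fixation with inhibition of return: look at the most interesting region, suppress it, and saccade to the next. The saliency values here are made-up numbers standing in for a map computed from image features.

```python
# Toy sketch of saliency-driven gaze. Fixating high-saliency regions
# one at a time is a sensible perception strategy on its own; the
# life-like saccading behavior falls out as a side effect.

def saccade_sequence(saliency, n_fixations):
    """Return the indices of the first n fixations, most salient first."""
    saliency = list(saliency)  # copy so the caller's map is untouched
    fixations = []
    for _ in range(n_fixations):
        target = max(range(len(saliency)), key=saliency.__getitem__)
        fixations.append(target)
        saliency[target] = float("-inf")  # inhibition of return
    return fixations

# Four regions in the scene with hypothetical saliency scores:
print(saccade_sequence([0.2, 0.9, 0.1, 0.7], 3))  # -> [1, 3, 0]
```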

August 20th, 2007 Posted by | In the News | no comments