So, Where’s My Robot?

Thoughts on Social Machine Learning

Simon@CHI

We are excited about how well Simon did at the CHI 2010 Interactive Demo session last week.  Our demo got a lot of traffic, especially during the opening event on Monday evening, and even got some coverage from PC World (who shot a video of the demo), Engadget, and NPR.

This was Simon’s first venture out of the lab, so it was interesting in forcing us to do more on-board perception and generally putting the platform through its paces.  We ran an interactive learning demo, using active learning, in which Simon learns a “cleaning up” task.  The human teacher provides examples of what goes where, and Simon uses these examples to build a model of what belongs in each location.  The teacher can also ask Simon if he has any questions, and the robot will point to or pick up the object whose location it is least certain about.  In addition to learning, we added perception capabilities to give Simon more ambient awareness of the environment, with an attention mechanism that responds to visually salient cues (as determined by the Itti saliency model), faces, and loud sounds.
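To give a flavor of the query-selection part of the demo, here is a minimal sketch of uncertainty-based active learning for a “what goes where” task. This is an illustrative toy, not Simon’s actual code; the class, bin, and feature names (SortingLearner, “left_bin”, “red”, etc.) are made up for the example. The learner builds a simple model from the teacher’s examples and, when asked, picks the object whose bin it is least certain about.

```python
import math

class SortingLearner:
    """Toy model of the 'what goes where' demo: each labeled example is an
    object's feature (e.g., its color) paired with the bin the teacher chose."""

    def __init__(self, bins):
        self.bins = bins
        # counts[bin][feature] = number of examples of this feature placed in bin
        self.counts = {b: {} for b in bins}

    def add_example(self, feature, bin_name):
        """Teacher shows where an object goes."""
        self.counts[bin_name][feature] = self.counts[bin_name].get(feature, 0) + 1

    def bin_distribution(self, feature):
        """P(bin | feature), with add-one smoothing so unseen objects stay uncertain."""
        scores = {b: self.counts[b].get(feature, 0) + 1 for b in self.bins}
        total = sum(scores.values())
        return {b: s / total for b, s in scores.items()}

    def most_uncertain(self, candidate_features):
        """Active learning query: return the object whose bin assignment has the
        highest entropy, i.e., the one the robot would point to and ask about."""
        def entropy(feature):
            dist = self.bin_distribution(feature)
            return -sum(p * math.log(p) for p in dist.values() if p > 0)
        return max(candidate_features, key=entropy)

# Example: the teacher demonstrates a few placements, then the learner asks
# about the object it is least certain how to sort.
learner = SortingLearner(bins=["left_bin", "right_bin"])
learner.add_example("red", "left_bin")
learner.add_example("red", "left_bin")
learner.add_example("green", "right_bin")
print(learner.most_uncertain(["red", "green", "yellow"]))  # -> "yellow"
```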

Simon got to interact with hundreds of people, and was particularly popular with kids of the conference attendees.

Simon also got to meet Carla Diana, the designer of his head shells, who finally got to see the finished product in person.

April 18th, 2010 | Conferences, GT Lab Updates, HCI, HRI

Should your robot have a face?

Hyun Lee recently defended her thesis at the MIT Media Lab, exploring the thought-provoking topic of “storied objects”. Her work centers on the design of everyday objects that can record their experience in the world and summarize it into a story. Her most extensive example is a park bench that records audio and can later segment and splice together its history of interactions into a summary of both the ambient and novel sounds it has heard.

This leads to several design questions about these artifacts. One that is particularly relevant to social robots is whether “storied objects” should have a human form and a human story, or whether they are new entities altogether, with different perspectives and different voices. In the park bench example, she explicitly did not want the object to have a human form or a human story, and designed the bench to have a park-bench-oriented story, put together from the events that happened to the bench.

In the field of social robots there is much debate over how human-like the physical form and behavior of a robot should be. A design perspective highlights the importance of the choice of form and how tightly it has to be integrated with function.

Why (or why not) build your robot to be human-like or anthropomorphic?

One practical reason to build a human-like robot is that the artifacts of our world are built for the human form; for example, a robot with human-like hands will be well suited to using everyday tools in the kitchen or on a construction site.

Additionally, the form of the robot communicates its capabilities to the human partner interacting with it. People have a tendency to anthropomorphize, talking about and treating objects, even basic computers, like social actors. If the robot has a face, eyes, or other human-like features, it will be easy for a human partner to anthropomorphize, allowing the robot to take advantage of human-like nonverbal communication. But here’s the catch: then you HAVE to implement these skills…e.g., your robot shouldn’t have human-like eyes if it’s not going to use them to communicate in the way that people interacting with it will expect.

For a robot that learns from a human partner, the form is one way that the robot designer can communicate to a human teacher what the robot is able to learn, and perhaps more importantly the boundaries of that ability. Importantly, anthropomorphic features can also allow the robot learner to transparently communicate the internal state of its learning process in a social way that a human teacher can intuitively interpret.

An important tenet of design is that form should follow function. Don Norman has written three really interesting books about the form and function of technology; his most recent, Emotional Design, includes some discussion of social robots and of emotions as a transparency device. The main problem that human-looking robots will face for many years to come is that we are nowhere close to human-level cognitive abilities, or even just manipulation skills. Thus, a human-looking robot form sets up unreasonable expectations of human-level function, and the human user interacting with such a robot will very quickly be disappointed.

September 15th, 2006 | HCI

Ok, How about a Virtual Robot?

[Images: ICT Virtual Humans, Sophie's Kitchen, Symon, NERO]

There are a number of reasons that virtual agents (as opposed to physical robotic agents) are interesting for research on social machine learning.

* All the human interaction, without the mess. Now that’s not exactly true, but AI research with virtual agents or video game characters seems like a great way to focus on the problems of social intelligence without having to solve other hard problems first (computer vision, navigation, manipulation, etc.). These virtual agents have to intelligently interact with real human players, and many relevant issues of embodiment, such as communicating attentional focus with a human partner, are still present in the virtual domain.

* Video games are popular! So there’s a real opportunity to build AI characters (NPCs: non-player characters) and put them to the test with real people. In particular, MMOs (massively multiplayer online games) are a huge opportunity to develop an AI character in an environment where social interaction and social intelligence are a number one priority.

* …and oh yeah, it has to be fun! So the issue of engagement cannot be ignored. In order for the virtual agent to learn something new from a human player, the human player has to enjoy the interaction; it has to be rewarding and fun to teach. Developing such engagement strategies is important, and they will hopefully transfer to robot learning agents as well.

I was just at the Intelligent Virtual Agents conference (IVA 2006), which prompted me to articulate the research questions that Social Machine Learning can tackle in the virtual domain.

The conference didn’t have much on social learning agents in particular, but there was a lot of interesting work on socially interactive agents. A few of the most interesting topics: emotionally intelligent behavior, recognizing human behavior, how humans perceive the behavior of a virtual agent, and generating gestures and nonverbal behavior.

This was my first year attending IVA, and I was struck by how similar the research questions in this virtual domain are to those we are tackling in the Human-Robot Interaction domain. Both fields would do well to have a bit more cross-pollination. Then we could start getting some of these robotic agents into game worlds and some of these game agents into the real world…fun!

August 30th, 2006 | Conferences, HCI

Social ML at AAAI

This week the American Association for Artificial Intelligence (AAAI) held its annual conference in Boston. There was much to see, and lots of reflection on the first 50 years of the field of AI. Compared to last year’s conference in Pittsburgh, this year I found several more papers and talks relevant for those of us interested in Social Machine Learning. Here’s the social learning reading list from AAAI-06:

Humans teaching software game agents:

“Real-Time Evolution of Neural Networks in the NERO Video Game”, Ken Stanley, Bobby Bryant, Igor Karpov, Risto Miikkulainen

“A Simple and Effective Method for Incorporating Advice into Kernel Methods”, Rich Maclin, Jude Shavlik, Trevor Walker, Lisa Torrey

“Reinforcement Learning with Human Teachers”, Andrea Thomaz, Cynthia Breazeal

Personal desktop assistants that learn the preferences of a user, both implicitly and explicitly. There were several papers on the CALO personal assistant project; here are a couple I found most interesting:

“Extracting Knowledge about Users’ Activities from Raw Workstation Contents”, Tom Mitchell, Sophie Wang, Yifen Huang, Adam Cheyer

Karen Myers gave an overview keynote, “Developing an Intelligent Personal Assistant”, in which she pointed to some learning aspects of the project (Tailor, LAPDOG, PLOW).

There were several sessions on robotics, and even a whole session on Human-Robot Interaction, which is great compared to the robotics and HRI representation last year. We had a paper on HRI and robot learning:

“Perspective Taking: An Organizing Principle for Learning in Human-Robot Interaction”, Matt Berlin, Jesse Gray, Andrea Thomaz, Cynthia Breazeal

One of the themes on Thursday was Intelligent Tutoring Systems, which is the reverse problem of Social Machine Learning, but relevant because it deals with a learning interaction between a human and a machine.

“Cognitive Tutors and Opportunities for Convergence of Human and Machine Learning Theory”, Ken Koedinger’s keynote talk

“Classifying Learner Engagement through Integration of Multiple Data Sources”, Carole Beal, Lei Qu, Hyokyeong Lee

These don’t deal with learning, but they are a couple of interesting papers on creating believable behavior in software agents that are meant to interact with people.

“Virtual Humans”, Bill Swartout (AI Magazine article)

“Using Anticipation to Create Believable Behavior”, Carlos Martinho and Ana Paiva

July 21st, 2006 | Conferences, HCI, Machine Learning

Why do machines need Social Learning?

As an introductory post, I thought I’d give a little motivation for Social Machine Learning and explain why it is one of the key challenges that stands in the way of Robots for Everyone…

By Robots for Everyone we’re talking about the commonsense version of robotics that the average consumer has in mind. Books and movies have long told us what these robots are going to do for us (clean our houses, do our chores, make our lives easier). But perhaps more importantly, these stories have also been about how these machines are supposed to interact with us and fit into our world. Thus, the average consumer expects that in the future they will have robots that will be able to communicate, cooperate, collaborate, and generally coexist in our human culture.

Several realms of academia and industry are actively at work toward the goal of consumer robotics, and future posts will go into detail about some of these endeavors. However, a key problem remains unsolved: social learning, which will be crucial to the successful application of robots in everyday human environments. Imagine a company building a ‘Helpful Assistant Robot’. It will be impossible for the company to pre-program at the factory all of the knowledge and skills these machines will need to be useful. What the ‘Helpful Assistant Robot’ really needs is the ability to learn new skills and tasks from everyday people, not robotics experts.

Current machine learning techniques have seen much success over the years, but they were not designed for learning from non-expert people and are generally not suited to it ‘out of the box’. And designing algorithms that are well suited for human interaction is not generally a topic of standard Machine Learning (ML) research; human interaction with technology is studied in entirely different fields, Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI).

And this brings us to Social Machine Learning. We, as a research community, need to bridge the gap between ML research and HCI/HRI research. My belief is that Machine Learning and Human-Machine Interaction can be mutually beneficial. The ability of a machine learning system to leverage social interaction should be thought of as more than just a good interface technique for people; it can positively impact the underlying learning mechanisms themselves, letting the system succeed in a real-time interactive learning session.
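To make that last point a bit more concrete, here is a minimal sketch, under my own assumptions, of one way social input can plug directly into a learning mechanism rather than sitting on top of it as an interface: a toy Q-learner whose update folds a human teacher’s feedback signal in alongside the environment’s reward. The class and parameter names (SociallyGuidedQLearner, human_weight) are hypothetical illustrations, not any particular published system.

```python
import random
from collections import defaultdict

class SociallyGuidedQLearner:
    """Toy Q-learner whose updates are shaped directly by a human teacher's
    feedback. A hypothetical sketch of the idea, not a specific algorithm."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, human_weight=1.0):
        self.q = defaultdict(float)          # Q[(state, action)] values
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.human_weight = human_weight     # how strongly teacher feedback counts

    def choose(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, env_reward, human_reward, next_state):
        """Standard Q-learning update, with the teacher's reward (e.g., a +1/-1
        button press, or 0 when the teacher says nothing) added to the
        environment reward before computing the temporal-difference target."""
        reward = env_reward + self.human_weight * human_reward
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

The point of the sketch is simply that the human signal lands inside the update rule itself, so a responsive teacher can steer what the system explores and how quickly it converges during a live teaching session.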

July 12th, 2006 | HCI, Machine Learning