So, Where’s My Robot?

Thoughts on Social Machine Learning

Learning Goals

What is a Social Machine Learner actually meant to learn? As humans, we are fundamentally wired to interpret the actions of other people in goal-oriented ways, which really speeds up teaching and learning. People are very good at seeing an example of something and abstracting out what the "point" was. Studies have even shown that kids will learn the correct goal of a new task when they never saw a correct demonstration of it (Meltzoff).

Given that its social partners will act, and interpret action, in intentional and goal-oriented ways, a Social Machine Learning system will need to continually refine its concept of what the human partner means to communicate and what the activity is about.

Csibra’s theory of human action understanding gives some inspiration about how our social machines will have to interpret their human partners. In the theory, activity has the representation [context][action][goal], and a series of experiments with infants finds that they have efficiency expectations with respect to each of these three components (Csibra 2003). For instance, given a goal and a context, infants expect the most efficient action to be used (and are surprised when it is not); the experiments show the ability to infer goal and context in a similar fashion. In one experiment, 9- to 12-month-old infants were repeatedly shown animations of a ball jumping over an obstacle to reach and contact a second ball. In this case the jumping action is instrumental to the goal (contacting the second ball). After habituating to this animation, the infants are shown a test configuration where the obstacle is gone. In one test condition the approaching ball does the same jumping action to reach the other ball; in the other, the approaching ball makes the more efficient straight-line approach. Using looking time as a measure of broken expectations, Csibra found that infants were using a goal-oriented interpretation: despite having habituated to the jumping action, they accepted the new, efficient straight-line action but were surprised by the now unnecessary jump. Thus infants show the ability to understand the goal of observed actions and to expect efficient means toward it.
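To make that concrete, here is a tiny Python sketch of the efficiency (rational-action) principle in the [context][action][goal] representation. This is my own toy illustration, not Csibra's model or any published algorithm; the contexts, action costs, and function names are all made up.

```python
# Toy illustration of a teleological observer: actions are expected to be the
# most efficient means to the goal in the current context, and an action that
# violates that expectation registers as "surprising".

# Cost of each action in each context; lower cost = more efficient means.
ACTION_COST = {
    ("obstacle", "jump"): 2.0,
    ("obstacle", "straight"): float("inf"),  # the straight path is blocked
    ("no_obstacle", "jump"): 2.0,
    ("no_obstacle", "straight"): 1.0,
}

def most_efficient_action(context, actions=("jump", "straight")):
    """Rational-action expectation: the cheapest action that achieves the goal."""
    return min(actions, key=lambda a: ACTION_COST[(context, a)])

def surprised(context, observed_action):
    """The observer is surprised when the observed action is not the most
    efficient means to the (inferred) goal, given the context."""
    return observed_action != most_efficient_action(context)

# Habituation: ball jumps over an obstacle to reach the other ball.
print(surprised("obstacle", "jump"))         # False: jumping is instrumental here

# Test configuration: obstacle removed.
print(surprised("no_obstacle", "jump"))      # True: the jump is now unnecessary
print(surprised("no_obstacle", "straight"))  # False: straight line is most efficient
```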

This kind of “efficiency” representation would be great for Social Machine Learners because it leads to a reasonable generalization of activity across contexts. For instance, if the system is always trying to build a better model of the [context] component of an activity representation, this will lead to the ability to say, “this looks like the kind-of-situation where I do X” or abstracted even further “I feel like doing X.” Also, this representation implies the flexibility to learn multiple ways to accomplish the same goal.
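A minimal sketch of that idea, again purely my own invention (the goal name, context predicates, and helper are hypothetical): a single goal can be paired with several context-dependent ways of achieving it, and refining the [context] component is what lets the learner recognize "the kind of situation where I do X."

```python
# One goal, multiple learned (context predicate, action) strategies.
skills = {
    "reach_other_ball": [
        (lambda ctx: ctx["obstacle"], "jump"),
        (lambda ctx: not ctx["obstacle"], "straight"),
    ],
}

def act_toward(goal, context):
    """Pick the first learned strategy whose context model matches the situation."""
    for matches, action in skills[goal]:
        if matches(context):
            return action
    return None  # no known way to achieve this goal here

print(act_toward("reach_other_ball", {"obstacle": True}))   # jump
print(act_toward("reach_other_ball", {"obstacle": False}))  # straight
```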

This is VERY different from most Machine Learning examples out there. Often systems are designed to learn a particular thing at a particular time. The goal is defined by the designer, either in the nature of the training data or in the choice of an optimization criterion, etc. Or, in many systems, there is an implicit goal to learn a "complete" model of the environment.

How do we bridge this gap and make machine learners able to flexibly learn new goals (concepts) from interacting with a non-expert human partner? I’ve been trying to tackle various aspects of this question in the systems I’ve been building for the past few years. I think this is a good example of an aspect of Social Machine Learning where the “Social” element has to be a fundamental part of the system, not just a nice interface slapped on at the end. The machine needs to understand and represent the world in social (goal-oriented) ways in order to learn in the way a social partner is going to expect.

Do you know of a system/algorithm that you believe is actually learning a new goal that it wasn’t specifically designed to learn? Leave a comment!
