So, Where’s My Robot?

Thoughts on Social Machine Learning

Satisfaction

One of my favorite topics these days is motivation. In trying to figure out what should motivate robot behavior, I find myself learning more about human motivation. I just finished a good book by Gregory Berns, “Satisfaction: Sensation Seeking, Novelty, and the Science of Finding True Fulfillment.”

Berns, an M.D./Ph.D. at Emory, is a fun storyteller. The book details years of his research into the neural basis of satisfaction. What satisfaction do people get out of money, sex, food, exercise? Berns tackles these and many other questions about how the human brain experiences and evaluates the world around it.

Due to how my brain is wired, I read this and think, what does that mean for robots? So, here are some aspects of the neural basis of human satisfaction that I think should inspire designers of robots and learning machines.

Work for it
Work for it
Satisfaction is directly linked to action. An fMRI experiment looked at the brain’s response to receiving monetary rewards. People saw shapes on a screen and pressed a button when a triangle appeared. At random, a $1 would appear on the screen to indicate they were receiving a reward. The group that had to press the button to transfer the $1 into their account showed a bigger brain response than the group that got it as a freebie. “Reward, at least as far as the brain is concerned, is in the action and not in the payoff.” Also mentioned is a study finding that rats prefer to work for rewards rather than get them for free.

So, a learning machine should seek out “rewards,” but any good state that comes about for free is much less valuable than one brought about by the machine’s own actions. This seems like a good way to focus a learner on the aspects of the world it has control over. It forces us to focus on the entire integrated, embodied system: learning, reward, and experience do not happen in a vacuum. Additionally, this finding argues against learning techniques that rely entirely on observation.
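To make that concrete, here’s a minimal sketch (my own illustration, not from the book; the function name and the 0.25 discount are made up) of a learner that discounts rewards it didn’t cause:

```python
# Hypothetical sketch: weight rewards by whether the agent's own action
# produced them. The names and the 0.25 discount are illustrative assumptions.

def effective_reward(raw_reward, caused_by_own_action, free_reward_discount=0.25):
    """Return the reward signal the learner actually trains on.

    Rewards the agent worked for count in full; "freebies" that arrive
    regardless of the agent's actions are heavily discounted, nudging the
    learner toward the parts of the world it can actually control.
    """
    if caused_by_own_action:
        return raw_reward
    return raw_reward * free_reward_discount


# Example: a $1 reward earned by pressing the button vs. one that appeared free.
print(effective_reward(1.0, caused_by_own_action=True))   # 1.0
print(effective_reward(1.0, caused_by_own_action=False))  # 0.25
```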

Keep the options open
This one comes from looking at how people reason about risk and the utility of money. “The buying of possibilities, and not the actual goods purchased, is what accounts for the allure of money. When you increase the number of options available to you, risk actually decreases….our brains seem to have a built-in bias to [this]. People prefer more choices.”

This indicates that a machine, as part of its decision-making process, may need the ability to maximize expected future opportunity (available actions) in addition to expected future payoffs/rewards.
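A rough sketch of what that could look like, with entirely made-up names and weights: score each action by its expected payoff plus a bonus for how many options it leaves open.

```python
# Illustrative sketch: pick actions by expected payoff plus an
# "options kept open" bonus. All names and the 0.1 weight are assumptions.

def action_score(expected_reward, num_future_options, option_weight=0.1):
    """Trade off immediate payoff against future opportunity."""
    return expected_reward + option_weight * num_future_options

def choose(actions):
    """actions: dict of name -> (expected_reward, num_future_options)."""
    return max(actions, key=lambda name: action_score(*actions[name]))

# A slightly worse payoff can win if it preserves more future choices.
print(choose({"cash_out": (1.0, 1), "keep_playing": (0.9, 5)}))  # keep_playing
```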

All about novelty
The intricate ways in which the brain deals with novelty are a major topic of the book. Essentially, our brains constantly try to make sense of the world and predict the future. New information is the best way to build better models and do a better job of predicting the future. So, the brain really, really likes new information, good or bad! The striatum is the area of the brain that seems to play the biggest role here. It seems to determine the importance of all information that is encountered (reminds me of Damasio’s somatic marker hypothesis, though I’m not sure he theorized about the striatum in particular). It “lights up” on predictions of pain/pleasure, indicating that “something significant” is about to happen.

This says that people working on curiosity drives are going in the right direction, driving behavior based on the agent’s ability to predict outcomes in the world. I’m interested in what kinds of representations are really going to work for this. A robot can’t have a complete world model, so what representation is necessary to be able to do this kind of immediate overall assessment that we see in the striatum?
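For a concrete starting point, here’s a minimal sketch of a prediction-error curiosity signal, assuming a crude linear world model (the class and all names are my own, not anything from the book):

```python
import numpy as np

# Sketch of a curiosity drive: intrinsic reward = the error of the agent's
# own forward model. The linear model and all names are my assumptions.

class CuriosityModule:
    """Pays the agent in proportion to how wrong its predictions are."""

    def __init__(self, state_dim, learning_rate=0.01):
        self.W = np.zeros((state_dim, state_dim))  # crude linear world model
        self.lr = learning_rate

    def intrinsic_reward(self, state, next_state):
        predicted = self.W @ state
        error = next_state - predicted
        # Learn from the surprise, then reward the agent for finding it.
        self.W += self.lr * np.outer(error, state)
        return float(np.linalg.norm(error))

# A never-before-seen transition is maximally surprising at first.
cm = CuriosityModule(state_dim=3)
s, s_next = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(cm.intrinsic_reward(s, s_next))  # 1.0
```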

The Information Gap
Novel events can lead to either a retreat or an explore response. Curiosity is when novel events trigger exploration, when the brain perceives an “information gap” between what it knows and what it wants to know. It follows naturally that the more you know, the more you’re curious about… a sort of learning snowball effect. This is the real chicken/egg problem for machine learning, because you have to already know something in order to perceive this information gap.
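One toy way to capture that chicken-and-egg point (entirely my own sketch, not Berns’s): let the perceived gap depend on partial familiarity, so a learner that knows nothing about a stimulus perceives no gap at all, and one that knows everything has nothing left to want.

```python
# Toy illustration (my own, not from the book): an information gap only
# opens for partly familiar stimuli. Zero familiarity means the learner
# can't even represent the question; full familiarity leaves no gap.

def information_gap(familiarity):
    """familiarity in [0, 1]; returns the perceived information gap."""
    return familiarity * (1.0 - familiarity)

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, information_gap(f))  # peaks at partial knowledge
```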

Shared insight is more fun
Much of the book deals with an individual’s perceptions and experience. But he does mention briefly that social interaction adds another dimension to novelty. It seems that novelty encountered as part of a team is even more rewarding than flying solo. Shared insight, “you see what I see,” is exciting probably because it is more rare than novelty alone.

This is a subtle but interesting point for social robots. The shared experience is an important part of situated learning for human teachers and learners. So, I think a machine’s ability to communicate to a human teacher, “hey, something new just happened, and I noticed it…”, is going to be very important for the teacher’s ability to easily intervene in the learning process.
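A minimal sketch of that signal, assuming the robot already computes some kind of prediction error (the names and threshold are my inventions):

```python
# Minimal sketch: the robot announces events its world model failed to
# predict, giving the human teacher a natural point to step in.
# The threshold and all names here are my assumptions.

def maybe_flag_novelty(prediction_error, teacher_notify, threshold=0.8):
    """Call teacher_notify with a message when an event is surprising."""
    if prediction_error > threshold:
        teacher_notify("Hey, something new just happened, and I noticed it!")

# Example: wire it to a print-based teacher channel.
maybe_flag_novelty(1.3, print)   # surprising -> message printed
maybe_flag_novelty(0.2, print)   # expected -> silence
```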

Berns sums up nicely in a concluding definition: “Satisfaction is the uniquely human need to impart meaning to one’s actions.” I’ll have to look him up once I’m in Atlanta, pick his brain about robot motivations. Maybe I’ll pretend I skipped the chapter about how he and his wife jazzed up their sex life (blush).

June 20th, 2007 | Machine Learning, Situated Learning