So, Where’s My Robot?

Thoughts on Social Machine Learning

AAMAS 2010 best paper goes to Social Learning

Socially-Guided Machine Learning was the topic of this year’s AAMAS best student paper: “Combining Manual Feedback with Subsequent MDP Reward Signals for Reinforcement Learning” by Brad Knox and Peter Stone.

In the paper, Knox and Stone investigate using a human reward signal in combination with environmental reward for a reinforcement learning agent. In particular, they analyze eight different ways to combine these two reward signals for performance gains. This is an important contribution toward formalizing the impact of social guidance on a reinforcement learning process.


From the abstract:

As learning agents move from research labs to the real world, it is increasingly important that human users, including those without programming skills, be able to teach agents desired behaviors. Recently, the TAMER framework was introduced for designing agents that can be interactively shaped by human trainers who give only positive and negative feedback signals. Past work on TAMER showed that shaping can greatly reduce the sample complexity required to learn a good policy, can enable lay users to teach agents the behaviors they desire, and can allow agents to learn within a Markov Decision Process (MDP) in the absence of a coded reward function. However, TAMER does not allow this human training to be combined with autonomous learning based on such a coded reward function. This paper leverages the fast learning exhibited within the TAMER framework to hasten a reinforcement learning (RL) algorithm’s climb up the learning curve, effectively demonstrating that human reinforcement and MDP reward can be used in conjunction with one another by an autonomous agent. We tested eight plausible TAMER+RL methods for combining a previously learned human reinforcement function, H, with MDP reward in a reinforcement learning algorithm. This paper identifies which of these methods are most effective and analyzes their strengths and weaknesses. Results from these TAMER+RL algorithms indicate better final performance and better cumulative performance than either a TAMER agent or an RL agent alone.
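To make the idea concrete, here is a minimal sketch (not the paper's code) of one of the simplest ways a human reinforcement function H could be combined with MDP reward: reward shaping, where the agent learns on R'(s, a) = R(s, a) + beta * H(s, a). The environment, the function names, and the constant weight beta are all illustrative assumptions for this sketch, not details taken from the paper.

```python
import random

class ChainEnv:
    """Toy 5-state chain (illustrative only); reaching state 4 gives reward +1."""
    def reset(self):
        return 0
    def actions(self, s):
        return ["left", "right"]
    def step(self, s, a):
        s2 = min(s + 1, 4) if a == "right" else max(s - 1, 0)
        done = (s2 == 4)
        return s2, (1.0 if done else 0.0), done

def q_learning_with_shaping(env, H, beta=1.0, alpha=0.1, gamma=0.9,
                            epsilon=0.1, episodes=100):
    """Tabular Q-learning on env reward augmented by a human reward model H."""
    Q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda x: Q.get((s, x), 0.0))
            s2, r, done = env.step(s, a)
            # combine MDP reward with the (assumed pre-learned) human model H
            shaped = r + beta * H(s, a)
            target = shaped if done else shaped + gamma * max(
                Q.get((s2, a2), 0.0) for a2 in env.actions(s2))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (target - q)
            s = s2
    return Q

# Hypothetical human model: the trainer consistently reinforces moving right.
H = lambda s, a: 0.2 if a == "right" else -0.2
Q = q_learning_with_shaping(ChainEnv(), H, beta=1.0)
```

The paper's point is that this naive additive combination is only one of eight candidates they test; others modify the action-selection step or the value function directly rather than the reward itself.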

October 21st, 2010 Posted by | Conferences, Machine Learning | 3 comments
