Charlie Kemp and his students took Georgia Tech’s PR2 down to the CNN studios last week for a live demo! They showed off some RFID-assisted manipulation: the robot autonomously drove up and delivered a pill bottle to the newscaster. The demo set the stage for comments from Willow Garage about the future of personal robotics, in which robots take over our repetitive tasks to free up our time for creative human endeavors. When asked when the PR2 or other such robots will be affordable for everyday folks, Keenan Wyrobek said it’s not 20 years out, but still a couple of years away.
Last week I attended the IEEE International Conference on Development and Learning, held at the University of Michigan. This is an interesting conference that I’ve been going to for the past few years. Its goal is to very explicitly mingle researchers working on Machine Learning and Robotics with researchers working on understanding human learning and development.
My lab had two presentations:
- “Optimality of Human Teachers for Robot Learners” (M. Cakmak, A. L. Thomaz): Here we take the notion of teaching in Machine Learning Theory and analyze the extent to which people teaching our robot adhere to theoretically optimal strategies. It turns out they teach positive examples optimally, but not negative ones. And we can use active learning in the negative space to make up for people’s non-optimality.
- “Batch vs. Interactive Learning by Demonstration” (P. Zang, R. Tian, A. L. Thomaz, C. Isbell): We show the computational benefits of collecting Learning by Demonstration (LbD) examples online rather than in a batch fashion. In an interactive setting, people automatically improve their teaching strategy when it is sub-optimal.
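The “active learning in the negative space” idea can be sketched roughly: given the learner’s current hypothesis about the positive region, the most informative negative examples are the ones just outside its boundary. Here is a minimal, hypothetical one-dimensional illustration (not the paper’s actual algorithm or code) where the concept is an interval:

```python
# Hypothetical sketch: active learning in the "negative space".
# Human teachers tend to give near-optimal positive examples but
# uninformative negatives, so the learner queries negatives itself.
# Assumes a toy 1-D concept: the target is an interval [lo, hi].

def most_informative_negative(candidates, lo, hi):
    """Pick the candidate outside [lo, hi] closest to the boundary.

    Points just outside the current hypothesis are the most
    informative negatives: labeling them tightens the boundary most.
    """
    negatives = [x for x in candidates if x < lo or x > hi]
    if not negatives:
        return None
    return min(negatives, key=lambda x: min(abs(x - lo), abs(x - hi)))

# Example: current hypothesis is [3, 7]; pool of unlabeled points.
pool = [0.5, 2.9, 5.0, 7.2, 9.8]
print(most_informative_negative(pool, 3.0, 7.0))  # -> 2.9
```

The same boundary-hugging intuition carries over to higher-dimensional concept spaces, where the learner would query points near the edge of its learned positive region.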
And here are some cool things I learned at ICDL.
Keynote speaker Felix Warneken gave a really interesting talk about the origins of cooperative behavior in humans. Are people helpful and good at teamwork because we learn it, or do we have some predisposition? His talk walks through a series of great experiments with young children, showing that helping and cooperation are things we are at least partly hardwired to do.
Chen Yu, from Indiana, does some really nice research looking into how babies look around a scene, and how this differs from adults or even older children. His group does this by having subjects wear headbands with cameras, and then running correlations across the multiple video and audio streams to analyze the data. For younger children, visual selection is very tied to manual selection. And the success of word learning is determined by the visual dominance of the named target.
Vollmer et al., from Bielefeld, did an analysis of their motionese video corpus and showed the different ways that a child learner gives feedback to an adult teacher. In particular, this feedback changes from being dominated by gaze behaviors to more complex anticipatory gestures between 8 and 30 months of age.
Several papers touched on the topic of intrinsic motivation for robots, as inspired by babies and other natural learners. Over the past few years there has been growing interest in this idea. People have gone from focusing on curiosity and novelty to competence and mastery. There were papers on this topic from Barto’s lab and from Oudeyer’s. The IM CLeVeR project, a large EU-funded collaboration that aims to address intrinsic motivation for robots, was also presented.
The Socially Intelligent Machines Lab at Georgia Tech is looking for a postdoc.
The work will focus on HRI for robots that learn interactively from human teachers. This will involve algorithm development, robot control implementation, and system evaluation with human subjects. The experience will include working with undergraduate, MS and PhD students, and with interdisciplinary faculty collaborators.
Applicants should have a Ph.D. in Computer Science or a field clearly related to the research area above.
Qualified applicants should provide the following materials:
- Cover letter briefly describing your background (including information about PhD institution, dissertation, and abstract) and career plans
- Date of availability to start the postdoc
- Names and contact information for at least three references, including the PhD advisor
- Link to a research website
These documents should be submitted as a single PDF to Prof. Andrea L. Thomaz with the email subject line: “Postdoc Candidate”
The position is guaranteed for a year from the start date, with a possible second year extension. The position has been open since June 4. We are currently reviewing applications and hope to fill the position as soon as possible.
Robots like the PR2 may be able to help older adults stay in their homes longer with a high quality of life. The Georgia Tech team aims to make progress towards this long-held dream. Rather than try to guess what seniors want, the team will work with older adults to better understand their needs and how robots can help. The team will also write code to make the PR2 perform helpful tasks at home. By working closely with seniors throughout the research process, the team hopes to better meet real needs and accelerate progress. To make everything more realistic, the robot will spend some of its time in a real, two-story house on the Georgia Tech campus, called the Aware Home.
They will be doing a spotlight for each of the eleven PR2s heading off to research labs this summer. The robots were sent off with quite the fanfare; this video (via IEEE Spectrum) captures the “graduation” event nicely, including a brief interview with someone from each team. The group as a whole is tackling a wide variety of personal robotics challenge problems!
We are excited about how well Simon did at the CHI 2010 Interactive Demo session last week. Our demo got a lot of traffic, especially during the opening event on Monday evening, and even got some coverage on PC World (who did the video below), Engadget, and NPR.
This was Simon’s first venture out of the lab, so it was interesting in forcing us to do more on-board perception and generally put the platform through its paces. We did an interactive learning demo, using active learning, in which Simon learns a “cleaning up” task. The human teacher provides examples of what goes where, and Simon uses these examples to build a model of what goes in each location. The teacher can also ask Simon if he has any questions, and the robot will point to or pick up the object whose location he is least certain about. In addition to learning, we added perception capabilities to give Simon more ambient awareness of the environment, with an attention mechanism that pays attention to visually salient cues as determined by the Itti saliency model, as well as to faces and loud sounds.
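The “least certain” query in the demo can be illustrated with a standard uncertainty-sampling rule: ask about the object whose predicted location distribution has the highest entropy. A minimal, hypothetical sketch (the actual Simon system is of course more involved):

```python
import math

def entropy(dist):
    """Shannon entropy of a probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def least_certain_object(predictions):
    """Return the object whose location distribution is most uncertain.

    `predictions` maps object name -> probabilities over locations
    (a hypothetical stand-in for the learned sorting model).
    """
    return max(predictions, key=lambda obj: entropy(predictions[obj]))

# Example: the learner is confident about the ball, unsure about the cup.
preds = {
    "ball": [0.90, 0.05, 0.05],  # almost certainly location 0
    "cup":  [0.34, 0.33, 0.33],  # nearly uniform -> query this one
}
print(least_certain_object(preds))  # -> "cup"
```

Entropy-based selection is just one common active learning criterion; margin- or committee-based measures would slot into the same query loop.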
Simon got to interact with hundreds of people, and was particularly popular with kids of the conference attendees.
He also got to meet the designer of his head shells, Carla Diana, who finally got to see the finished product in person.
A busy day packing up the robot…
A short ride over to the Hyatt Regency…
And Simon says, “See you tomorrow at CHI 2010!!”
Over the summer my lab has been working on getting our new robot, Simon, up and running. We are pretty excited that he was picked to be on the cover of the Tech Review this month, for the TR35 issue!
Simon is an upper-torso robot with a socially expressive head. We designed Simon specifically with the notion of side-by-side human-robot interaction in mind. We worked with Aaron Edsinger and Jeff Weber of Meka Robotics on the torso, arms, and hands. A key feature of this robot compared to others we considered is its size: it has body proportions similar to a 5’7” woman, so its stature should not be intimidating for a person working with it. Additionally, the arms are compliant, a key safety feature for side-by-side HRI.
Designing Simon’s head was an interesting challenge. Essentially, we started with the size and constraints of the torso/arms/hands and worked from there. Given a body of this size, what is an appropriate head size, where should the eyes be placed with respect to the head, what should the overall “character” of the robot be? To answer these questions we worked with Carla Diana, who is now at Smart Design in NYC and was a visiting professor in Industrial Design at Georgia Tech last year. Over a few months (and lots of small scale prototyping on a 3D printer!) we arrived at the final Simon character. The face shape and feature proportions were chosen to reflect youth. Given that our research centers around learning, and people teaching the robot, we wanted the character of the robot to help set expectations about the level of intelligence.
Additionally, the robot has some non-human degrees of expression in the ears, which can move up/down, rotate, and change color (using an array of RGB LEDs behind a translucent plate). The design idea behind this is similar to another robot that I worked with, where having it be a non-recognizable creature helps to reduce the prior expectations that people have when they begin interacting with the robot. For example, if it doesn’t speak, that makes sense, but if it speaks, that seems reasonable too. And getting away from the completely humanoid form helps to avoid the uncanny valley.
It is exciting to see Simon starting to come to life. We have several projects underway working on endowing him with some social learning skills; stay tuned for more on that over the next few months.
Some work out of my lab on social robot learning was recently presented at ICDL in June (and got *best student paper!*) and follow-up work is going to be presented at RO-MAN in a couple of months. Here’s an overview + video…
Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four biologically inspired social learning mechanisms on a robot: stimulus enhancement, emulation, mimicking, and imitation, and illustrate the computational benefits of each. In particular, we illustrate that some strategies are about directing the attention of the learner to objects and others are about actions. Taken together these strategies form a rich repertoire allowing social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot “playmates”. (Find links to the papers here)