Over the next few weeks and months we are organizing a series of guest bloggers here. Each guest has been asked to write one post in response to the question "So, Where's My Robot?" The question is purposely vague; our goal is to see a variety of commentary about the important problems we have to work on before we'll see everyday robots in the world.
Several interesting folks are lined up. And if there is someone you would like to hear from, please send an email or leave a comment!
Over the summer my lab has been working on getting our new robot, Simon, up and running. We are pretty excited that he was picked to be on the cover of the Tech Review this month, for the TR35 issue!
Simon is an upper-torso robot with a socially expressive head. We designed Simon specifically with the notion of side-by-side human-robot interaction in mind. We worked with Aaron Edsinger and Jeff Weber of Meka Robotics on the torso, arms, and hands. A key feature of this robot compared to others we considered using is its size. It has body proportions similar to those of a 5'7" woman, so its size should not be intimidating for a person working with the robot. Additionally, the arms are compliant, a key safety feature for side-by-side HRI.
Designing Simon’s head was an interesting challenge. Essentially, we started with the size and constraints of the torso/arms/hands and worked from there. Given a body of this size, what is an appropriate head size, where should the eyes be placed with respect to the head, and what should the overall “character” of the robot be? To answer these questions we worked with Carla Diana, who is now at Smart Design in NYC and was a visiting professor in Industrial Design at Georgia Tech last year. Over a few months (and lots of small-scale prototyping on a 3D printer!) we arrived at the final Simon character. The face shape and feature proportions were chosen to reflect youth. Given that our research centers around learning, and people teaching the robot, we wanted the character of the robot to help set expectations about the level of intelligence.
Additionally, the robot has some non-human degrees of expression in the ears, which can move up/down, can rotate, and can change color (using an array of RGB LEDs behind a translucent plate). The design idea behind this is similar to another robot that I worked with, where having it be a non-recognizable creature helps to reduce the prior expectations that people have when they begin interacting with the robot. For example, if it doesn’t speak that makes sense, but if it speaks that seems reasonable too. And getting away from a completely humanoid form helps to avoid the uncanny valley.
It is exciting to see Simon starting to come to life. We have several projects underway working on endowing him with some social learning skills, so stay tuned for more on that over the next few months.
A fun video from Willow Garage last week. For the last couple of years they have been bringing in an army of interns over the summer. When I visited last summer, I was told that their staff doubled when the interns showed up! Not sure if that was true this summer as well, but it does look like the interns had some fun with the PR2 in the 3-day Intern Challenge.
The challenge was a waiter task: take a drink order, deliver the drink, pick up the mess. Both teams used a mixture of autonomous behavior and teleoperation, and it’s not completely obvious from the video which parts are autonomous. One of the teams created a more humorous interaction that was fun for the audience. I’m not sure if they thought of it this way, but others have suggested that self-deprecating humor can be a good interaction technique. It lowers the user’s expectations in a way that is not disappointing.
It looks like the most challenging HRI task is the object handoff; it was awkward every time. The human didn’t know if they were supposed to wait for the object to come to them, or meet the robot halfway. Larry Page was looking at the hand, and waiting, and then looking to a person near the camera, and then he finally got his drink. It looks like the robot should expect people to be helpful and meet it halfway (at least), especially if it’s moving slowly. This, plus some simple force sensing to tell if the object in hand is being tugged, would be fun to try, to see if the handoff works a little better.
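As a sketch of that force-sensing idea (everything here is hypothetical — the function name, thresholds, and readings are illustrative, not actual PR2 code): the robot holds the object out and opens its gripper once it feels a sustained pull away from the free-holding baseline.

```python
# Hypothetical force-based handoff release policy (illustrative only,
# not actual PR2 code). The robot releases its grip once the sensed
# force deviates from the free-holding baseline for long enough that
# it is clearly a human tug rather than sensor noise.

def should_release(force_readings, baseline, threshold=2.0, min_samples=3):
    """Return True if the object appears to be tugged: the measured
    force deviates from `baseline` by more than `threshold` newtons
    for at least `min_samples` consecutive readings."""
    consecutive = 0
    for f in force_readings:
        if abs(f - baseline) > threshold:
            consecutive += 1
            if consecutive >= min_samples:
                return True
        else:
            consecutive = 0  # deviation not sustained; likely noise
    return False
```

With a 5 N holding baseline, a brief spike is ignored but a sustained pull triggers release: `should_release([5.1, 4.9, 8.5, 5.0, 5.1], 5.0)` is False, while `should_release([5.1, 5.0, 8.2, 8.5, 8.4], 5.0)` is True.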
New Scientist covered the recent IJCAI Robotics Event. One of the themes of the workshop and event was maximizing the potential for AI research and Robotics research to co-mingle. A major challenge in this respect is a lack of “out of the box” hardware platforms and software architectures for people to play around with. In the software department, people are talking about this as the need for an Operating System for robots.
This is what Microsoft would like Robotics Developer Studio to be. Willow Garage would like to see standardization around ROS. And as they mention on their wiki page, there have been many of these kinds of projects in the past aiming for standards in the robot software/hardware interface.
I think it will be great when any of these projects gets the kind of critical mass that will create a standardization around it. To some extent, I don’t even care which one it is, but I do feel that an open source solution like ROS is going to be the most successful. The community of developers that need a Robot OS right now are definitely not waiting for someone else to deliver what they need. They are currently rolling their own solutions and the best way to create a standard is to direct that collective energy towards the same end.
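To illustrate what that standardization buys (a toy sketch with hypothetical names, not actual ROS or Robotics Developer Studio code): if every platform exposes the same minimal driver interface, behavior code can be written once and reused across robots instead of being re-implemented for each one.

```python
# Toy sketch of the hardware-abstraction idea behind a "robot OS"
# (hypothetical interface; not ROS or RDS code).

class RobotDriver:
    """Minimal interface each platform-specific driver implements."""
    def set_velocity(self, linear, angular):
        raise NotImplementedError

class LoggingDriver(RobotDriver):
    """Stand-in for real hardware: records commands instead of moving."""
    def __init__(self):
        self.commands = []
    def set_velocity(self, linear, angular):
        self.commands.append((linear, angular))

def drive_square(driver, side=1.0):
    """Behavior written against the interface, not the hardware, so it
    runs unchanged on any robot with a conforming driver."""
    for _ in range(4):
        driver.set_velocity(side, 0.0)    # forward one side length
        driver.set_velocity(0.0, 1.5708)  # turn ~90 degrees
    driver.set_velocity(0.0, 0.0)         # stop
```

The point of the sketch is the seam: `drive_square` never touches hardware details, so swapping `LoggingDriver` for a real platform's driver requires no change to the behavior code — which is exactly the kind of reuse a standard robot OS is meant to enable.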
The New Scientist article points to some of the current barriers to a standard OS: each robot out there has unique hardware, is often designed for a specific purpose, and has software optimized for that purpose.
In addition to this, when I’ve had conversations with people about standardization, the biggest barrier seems to be that everyone has their way of getting things done now that works for them, and they’d prefer everyone standardize to their way. But in the end, as said well by Chad Jenkins and Brian Gerkey, the frustration of endless re-implementation will eventually drive us to standardize.