The work will focus on HRI for robots that learn interactively from human teachers. This will involve algorithm development, robot control implementation, and system evaluation with human subjects. The experience will include working with undergraduate, MS and PhD students, and with interdisciplinary faculty collaborators.
Applicants should have a Ph.D. in Computer Science or a field clearly related to the research area above.
Qualified applicants should provide the following materials:
Cover letter briefly describing your background (including your PhD institution and dissertation, with abstract) and career plans
Date of availability to start the postdoc
Names and contact information for at least three references, including the PhD advisor
Link to a research web site.
These documents should be submitted as a single PDF to Prof. Andrea L. Thomaz with the email subject line: “Postdoc Candidate”
The position is guaranteed for a year from the start date, with a possible second year extension. The position has been open since June 4. We are currently reviewing applications and hope to fill the position as soon as possible.
As some may have noticed, this blog tends to go through a hiatus from time to time, for one reason or another over the years. The hardest part of a hiatus is starting up again with that first post. It feels like it needs to be so significant, given how much time has passed. So, to get over this first-post-back syndrome, I’ve decided that surely the most important thing to happen in robotics and HRI since December was that Barbie’s 126th career will be Computer Engineer! Which is pretty darn close to a roboticist…they were asking for accessory suggestions, and I put in my vote for a robot dog.
The Robots podcast describes itself as “the podcast for news and views on robotics. In addition to insights from high-profile professionals, Robots will take you for a ride through the world’s research labs, robotics companies and their latest innovations.”
Next summer, AAAI 2010 will be coming to Atlanta. I’m co-chairing the Robotics Exhibit with Monica Anderson. The exhibit includes both open demonstrations of robotics research that intersects with AAAI and demonstrations focused on specific challenge problems.
Each challenge is intended to be an experiment designed to motivate and evaluate an individual function of artificial intelligence for robotics, similar to the Semantic Robot Vision Challenge at AAAI-07.
This is the second year that Learning by Demonstration will be one of the topics. Last year we had open demonstrations of LbD systems. This year’s LbD event is being organized by Sonia Chernova, and folks are invited to optionally participate in a LbD challenge problem:
Optional challenge event in which all participants will perform an object relocation task that involves teaching the robot to move an object from one place to another. Each participating team will be provided with sample objects for practice in the weeks before the event. Due to differences in embodiment and learning algorithms, we expect to see a wide variety of approaches for performing the target behavior. A video showcasing the results will be compiled by the event organizers.
Applications for exhibitors aren’t due until later in the Spring, so there’s plenty of time to get your learning robots ready for Atlanta!
Over the next few weeks/months we are organizing a series of guest bloggers here. Each guest has been asked to write one post in response to the question “So, Where’s My Robot?” The question is purposely vague; our goal is to see a variety of commentary about the important problems we have to work on before we’ll see everyday robots in the world.
Several interesting folks are lined up. And if there is someone you would like to hear from, please send email or write a comment!
Over the summer my lab has been working on getting our new robot, Simon, up and running. We are pretty excited that he was picked to be on the cover of the Tech Review this month, for the TR35 issue!
Simon is an upper-torso robot with a socially expressive head. We designed Simon specifically with the notion of side-by-side human-robot interaction in mind. We worked with Aaron Edsinger and Jeff Weber of Meka Robotics on the torso, arms and hands. A key feature of this robot compared to others we considered is its size: it has body proportions similar to those of a 5’7” woman, so it should not be intimidating for a person working with the robot. Additionally, the arms are compliant, a key safety feature for side-by-side HRI.
Designing Simon’s head was an interesting challenge. Essentially, we started with the size and constraints of the torso/arms/hands and worked from there. Given a body of this size, what is an appropriate head size, where should the eyes be placed with respect to the head, what should the overall “character” of the robot be? To answer these questions we worked with Carla Diana, who is now at Smart Design in NYC and was a visiting professor in Industrial Design at Georgia Tech last year. Over a few months (and lots of small scale prototyping on a 3D printer!) we arrived at the final Simon character. The face shape and feature proportions were chosen to reflect youth. Given that our research centers around learning, and people teaching the robot, we wanted the character of the robot to help set expectations about the level of intelligence.
Additionally, the robot has some non-human degrees of expression in the ears, which can move up/down, can rotate, and can change color (using an array of RGB LEDs behind a translucent plate). The design idea behind this is similar to that of another robot I worked with, where having it be a non-recognizable creature helps to reduce the prior expectations that people have when they begin interacting with the robot. For example, if it doesn’t speak that makes sense, but if it speaks that seems reasonable too. And getting away from the completely humanoid form helps to avoid the uncanny valley.
It is exciting to see Simon starting to come to life. We have several projects underway to endow him with some social learning skills; stay tuned for more on that over the next few months.
I’m running one of the AAAI Spring Symposia this year, Agents that Learn from Human Teachers, along with Cynthia Breazeal, Sonia Chernova, Dan Grollman, Charles Isbell, Fisayo Omojokun, and Satinder Singh.
Submissions are due by Oct. 3.
The symposium aims to bring together a multi-disciplinary group of researchers to discuss how we can enable agents to learn from real-time interaction with an everyday human partner, exploring the ways in which machine learning can take advantage of elements of human-like social learning.
Topics of interest include, but are not limited to:
–ML for interactive, real-time learning
–supervised and semi-supervised learning approaches
–active learning approaches
–feature selection techniques
–methods for improving beyond the observed performance of the teacher based on the agent’s own successes and failures
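To make one of the topics above concrete, here is a toy sketch of the active learning idea (purely my own illustration, not a system from the symposium): the learner asks a simulated human teacher to label only the point it is most uncertain about, which lets it pin down a 1-D decision boundary with a handful of queries instead of labeling everything.

```python
# Illustrative sketch of uncertainty-sampling active learning on a
# 1-D threshold concept. The "teacher" here is a stand-in for a human
# labeler; the threshold value 0.62 is arbitrary.

def teacher(x, threshold=0.62):
    """Simulated human teacher: label is 1 if x is past the threshold."""
    return 1 if x >= threshold else 0

def active_learn(teacher_fn, lo=0.0, hi=1.0, tol=1e-3):
    """Query the most uncertain point (the midpoint of the unresolved
    interval) until the boundary is localized to within tol."""
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        queries += 1
        if teacher_fn(mid) == 1:
            hi = mid   # boundary is at or below mid
        else:
            lo = mid   # boundary is above mid
    return (lo + hi) / 2.0, queries

estimate, n_queries = active_learn(teacher)
print(f"estimated boundary ~ {estimate:.3f} after {n_queries} queries")
```

The point of the sketch is the query count: localizing the boundary to one part in a thousand takes only about ten labels from the teacher, a log-vs-linear savings that is exactly what makes active querying attractive when each label costs real human time.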
It is geared toward people thinking about robots and software agents that learn with human input. And in addition to the AI and Machine Learning crowd, the organizing committee and I are looking to have a good representation of folks from developmental psych and social psych, to really address the human side of the teaching-learning equation.
Well, I fell off the wagon last semester, but it’s time to dust off the blog and get things going again.
It’s been a fun, exciting, and hectic first year at Georgia Tech. I taught a new grad course in the Spring, a seminar on Human-Robot Interaction, and designing the course was interesting in and of itself. Given that the field of HRI is still defining itself, the topic list for a course on HRI is pretty up in the air. So, the course reflects my particular interests in HRI and has an AI and Cognitive Science bent. I chose a few broad categories of social intelligence, and for each category we did a survey of readings on what we know about how that capability arises in humans, coupled with readings on state-of-the-art approaches to implementing these kinds of capabilities on robots. For example: understanding intentions, social learning, teamwork, empathy and emotions.
My other main focus this year has been getting together a social robot platform for my research group. I’m working with a company, Meka Robotics, who are building us an upper-torso robot with two arms and three-fingered hands. And a group at the Georgia Tech Research Institute is building us a socially expressive head. It’s been a really fun and interesting process; stay tuned for more details as we start to get the hardware up and running.