Charlie Kemp and his students took Georgia Tech’s PR2 down to the CNN studios last week for a live demo! They showed off some RFID-assisted manipulation: the robot autonomously drove up and delivered a pill bottle to the newscaster. Their demo set up comments from Willow Garage about the future of personal robotics, in which robots take over our repetitive tasks to free up our time for creative human endeavors. When asked when the PR2 or other such robots will be affordable for everyday folks, Keenan Wyrobek said it’s not 20 years out, but still a couple of years away.
A recent article in Technology Review introduces some cool technology from Lyric Semiconductor: a probability-based processor.
The electrical signals inside Lyric’s chips represent probabilities, instead of 1s and 0s. While the transistors of conventional chips are arranged into components called digital NAND gates, which can be used to implement all possible digital logic functions, those in a probability processor make building blocks known as Bayesian NAND gates. … Whereas a conventional NAND gate outputs a “1” if either of its inputs is a “0,” the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.
Sounds like their initial impact will be in the flash memory market, making error-checking faster and more efficient. But I can definitely see how this kind of hardware could have a major impact on Machine Learning and Robotics. Much of statistical machine learning has its roots in exactly the kind of math this processor is designed for. This could make reasoning about larger (more real-world) problems increasingly feasible.
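To make the idea concrete, here is a tiny Python sketch (my own illustration, not Lyric’s actual design) of the quantity a Bayesian NAND-style gate could compute: the probability that two independent binary inputs agree, given only the probability that each one is a 1.

```python
def match_probability(p, q):
    """Probability that two independent binary signals agree,
    where p = P(a = 1) and q = P(b = 1).

    The signals match when both are 1 (p * q) or both are 0
    ((1 - p) * (1 - q)).
    """
    return p * q + (1 - p) * (1 - q)

# Certain inputs behave like ordinary digital logic:
match_probability(1.0, 1.0)  # 1.0 -- both definitely 1, they match
match_probability(1.0, 0.0)  # 0.0 -- definitely different

# Uncertain inputs give graded answers instead of a hard 1/0:
match_probability(0.9, 0.8)  # close to 0.74
```

The point of the sketch is that once gates pass around probabilities rather than bits, inference-style computations become native operations rather than something simulated on top of digital logic.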
Exciting news recently in the realm of robotics and public policy. Robotics is recommended as a Science and Technology Priority for the 2012 budget. The recent OSTP/OMB memo lists six challenge areas, the first of which is “Promoting sustainable economic growth and job creation,” and one of the three recommendations in this section is:
“Support R&D in advanced manufacturing to strengthen U.S. robotics, cyber-physical systems, and flexible manufacturing.”
Congratulations to Henrik Christensen and all of those in the robotics community who have worked hard over the past couple of years to educate science and technology policy makers about the ways in which robotics research and development can have a positive impact on the U.S. economy and society.
Autom, the weight-loss coach, has been on a media tour of late in the run-up to her first release later this year. She was looking good in a “Fox and Friends” segment this past weekend. And the great pic on the left is from some recent coverage in the Tech Review. You can also check out the Robots podcast episode with Autom inventor Cory Kidd to hear more about the science behind the design of a social robot for healthcare.
Looks like you’ll be able to get your own Autom from Intuitive Automata for $500 in just a few months. Looking forward to seeing how people do with them!
I had a unique opportunity yesterday: I was invited to participate in a PCAST workshop (the President’s Council of Advisors on Science and Technology). The theme of the meeting was Bio/Info/Nano tech: what exciting opportunities in these fields will create jobs in the US, and what the government can do to spur innovation. I’ll probably have a couple of SWMR posts about the discussion, and thought I’d start off with one about my own contribution, since I was the only roboticist in the room.
They had a wide range of discussants. Several of us were early-career researchers, who I think were invited to share our “what’s new and exciting that’s going to create jobs” point of view. Another contingent of the group were more seasoned researchers and entrepreneurs, who had a from-the-trenches perspective on how the government’s support of basic research has changed over the years.
Each discussant had the opportunity in a three minute introduction to make a statement to the council. Here’s a recap of what I said:
The technology opportunity I decided to highlight is service robotics, because service robots have the potential to dramatically impact such a diverse set of societal needs. Robots that are capable of working alongside people will revolutionize workplaces, for example in manufacturing.
Robotics represents perhaps our best opportunity to achieve higher levels of domestic manufacturing agility and overall productivity needed to retain high-value manufacturing jobs in the U.S., provided that the current state of the technology can be significantly advanced.
Today’s industrial robots lack the capabilities required to do more than just blindly execute pre-programmed instructions in structured environments. This makes them expensive to deploy and unsafe for people to work alongside.
There is an opportunity to usher in a new era of agile and innovative manufacturing by developing service robots as co-workers in the manufacturing domain. These capable assistants would work safely in collaboration with, and in close proximity to, highly skilled workers, providing logistical support: automatically fetching parts, packing and unpacking, loading, stacking boxes, emptying bins, and detecting and cleaning spills.
Very similar logistical robotic support could help streamline the operation of hospitals, driving healthcare costs down.
In order to realize this vision, we need to move beyond robots only operating in relatively static structured environments. This presents several research challenges, and I think that the following three are most critical to progress.
– This requires advances in sensing and perception technology, allowing robots to keep track of a dynamically changing workplace.
– Manipulation is a key challenge as well: robots need the flexibility to pick up and use objects in the environment without tedious pre-programming of specialized skills.
– Finally, an important challenge in bringing these robots to fruition is advancing human-robot interaction. We need these robots to work safely and efficiently in collaboration with human workers. People can’t just be seen as obstacles for the robot to navigate around; the robot needs to reason about and understand people as interaction partners.
Recently, over 140 robotics experts across the country have come together to articulate a national robotics initiative, a robotics research roadmap. This roadmap lays out the target areas where we think robotics research efforts need to be supported in order to bring about robot technology that will have the biggest impact on our economy and our society.
The comment I got from one of the council members was interesting, she said (I’m paraphrasing) “Aren’t you leaving out the challenge of Sentience or AI needed?” I only had time for a short answer, and said something to the effect that, yes, I think that the notion of AI cuts across all of the three areas I mentioned, but particularly human-robot interaction. In order for a robot to work side-by-side with a human partner it will need human compatible intelligence capabilities.
But here on SWMR, I’ll give the longer answer: no, I don’t think we need AI for service robots. Or at least, I don’t think that’s what we should call it. Yes, perception and manipulation and HRI and autonomy in general all fit under the big umbrella term of AI. But the term AI is so vague, and it makes people think of science fiction, which then makes robots in society feel like some pipe dream far in the future. So, particularly in settings like PCAST, where people want to hear about concrete objectives and job creation, it does our field no good to just lump everything under the term AI.
If instead we talk about the specific intelligence challenges, suddenly it all seems much more achievable, and you can imagine some semi-autonomous form of service robot being deployed in the not-so-distant future. Sensing technology is getting better and better, and look at all the academic and industrial partners working on the manipulation problem; that seems achievable. And in terms of AI for human-robot interaction, yes, we need to make some significant advances in computational models of social intelligence before robots can truly interact with people in unstructured environments. But do we need to solve AI? I don’t think so.
There’s been a recent flurry of announcements of telepresence robots. QB is now available from the California-based Anybots (shipping Fall 2010). Texai is the Willow Garage platform (not yet for sale). And Vgo, from startup Vgo Communications, was recently covered in the Boston Globe.
Some visions for these platforms include telecommuters logging in to interact with their co-workers at the office, attending a meeting with co-workers in another city, or checking out a situation at the factory from the comfort of an office in another location. While you’re probably not going to buy one for personal use (QB has a $15K price tag, and Vgo is about $6K), you might just start to see them roaming around your office.
Vgo and Texai look more like laptops on wheels, but QB has a more anthropomorphic presence that I think is nice. Its adjustable height lets it get fairly tall, which is necessary for the telecommuter to get any reasonable view of the remote location, and allows it to participate in standing or sitting conversations. Yet it is small, skinny, and light, which makes it safe and easy to operate in close proximity to people. You drive it with the arrow keys on your keyboard, to make it go left, right, forward, or back.
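That arrow-key driving scheme is simple enough to sketch. Below is a hypothetical Python mapping from the currently held key to a (linear, angular) velocity command; the key names and speed values are my own illustration, not Anybots’ actual interface.

```python
# Hypothetical arrow-key teleop mapping for a telepresence base.
# Key names and speeds are illustrative, not QB's real interface.
KEY_TO_VELOCITY = {
    "up":    (0.3,  0.0),   # drive forward (m/s, rad/s)
    "down":  (-0.2, 0.0),   # back up, a bit slower
    "left":  (0.0,  0.5),   # rotate in place
    "right": (0.0, -0.5),
}

def velocity_command(held_key):
    """Map the currently held key to a velocity pair; stop if no key is held."""
    return KEY_TO_VELOCITY.get(held_key, (0.0, 0.0))
```

A real teleop loop would poll the keyboard at a fixed rate and stream the resulting command to the base, so releasing the key brings the robot to a stop.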
One thing I think is missing from all of these telepresence robots, but particularly from an anthropomorphic one like QB, is a neck tilt. For utility purposes, I imagine it would be nice to scan up and down with the camera. But also for social purposes, it would be nice for the person to trigger head nods (and shakes, if we could add another DOF). If this robot is going to stand in for people in conversations, it could be awkward if it doesn’t display any backchannel communication. People will get used to the robot not doing it, but its presence in the conversation would be much stronger with a backchannel. These are the little “yeahs” and “uh-huhs” that people mutter to let the speaker know “I’m with you, keep going.” Much of this happens over the speech channel, which QB will capture, but we also use body language to communicate this backchannel info to the speaker. Allowing QB to nod would let the telecommuter give both physical and verbal backchannel to the person they are speaking with.
I’m excited to see how people start using their Anybots and Vgos, the water cooler will never be the same! Here’s a video of QB in action.
Robots like the PR2 may be able to help older adults stay in their homes longer with a high quality of life. The Georgia Tech team aims to make progress towards this long-held dream. Rather than try to guess what seniors want, the team will work with older adults to better understand their needs and how robots can help. The team will also write code to make the PR2 perform helpful tasks at home. By working closely with seniors throughout the research process, the team hopes to better meet real needs and accelerate progress. To make everything more realistic, the robot will spend some of its time in a real, two-story house on the Georgia Tech campus, called the Aware Home.
They will be doing a spotlight for each of the eleven PR2s heading off to research labs this summer. The robots were sent off with quite the fanfare, this video (via IEEE Spectrum) captures the “graduation” event nicely, including a brief interview with someone from each team. The group as a whole is tackling a wide variety of personal robotics challenge problems!
Today Willow Garage announced the winners of their PR2 Beta Program CFP. After reviewing 78 proposals they selected 11 schools to receive a PR2, and our Georgia Tech team, headed up by Prof. Charlie Kemp, made the cut.
- Albert-Ludwigs-Universität Freiburg with the proposal TidyUpRobot.
- Bosch with the proposal Developing the Personal Robotics Market.
- Georgia Institute of Technology with the proposal Assistive Mobile Manipulation for Older Adults at Home.
- Katholieke Universiteit Leuven with the proposal Unified Framework for Task Specification, Control and Coordination for Mobile Manipulation.
- MIT CSAIL with the proposal Mobile Manipulation in Human-Centered Environments.
- Stanford University with the proposal STAIR on PR2.
- Technische Universität München with the proposal CRAM: Cognitive Robot Abstract Machine.
- University of California, Berkeley with the proposal PR2 Beta Program: A Platform for Personal Robotics.
- University of Pennsylvania with the proposal PR2GRASP: From Perception and Reasoning to Grasping.
- University of Southern California with the proposal Persistent and Persuasive Personal Robots (P^3R): Towards Networked, Mobile, Assistive Robotics.
- University of Tokyo, Jouhou System Kougaku (JSK) Laboratory with the proposal Autonomous Motion Planning for Daily Tasks in Human Environments using Collaborating Robots.
iRobot believes that next-generation practical robots have the potential to help caregivers perform critical work and extend the time that people can live independently. Robots may be capable of assisting in senior care in a variety of real-life situations, including household chores and the on-time administration of medication. This could ultimately lower the cost for care.
I’m often asked to define what a social robot is. And the definition that I’ve been using for the past few years is “any robot that is designed to interact with people as part of its functional goal.” I like this definition because it lets the end scenario determine whether or not a robot is social rather than the designer of the robot. For example, a robot could be designed to deliver medicine to a person without very much attention to HRI, focusing only on navigation and planning, but I would still call this a social robot (albeit not likely to be a successful one).
So in this definition the Roomba is not really a social robot. When functioning properly, you shouldn’t have to interact with it very much at all. Ideally it’s mostly functioning when you are away. This move into healthcare robotics now sends iRobot squarely into the domain of social robots, designed to interact with humans as part of their functional goal. It will be exciting to see this develop!
A new company, ToyBots, was announced at the TechCrunch 50 event this week. Their vision is a combination of social networking, online gaming, and robot toys. Similar in spirit to Webkinz or Club Penguin, but creating a tighter coupling between the physical toy and the virtual world.
Interestingly, this company isn’t exactly focused on developing the end-user scenarios; they want to provide the mechanism and infrastructure for other people to develop robot toys that can be connected through games and social networks online. Like an App Store for robot toys, they say, calling it “the internet of things.”
Looks fun, will be interesting to see where this goes. And it’s great to see social robots in the TechCrunch 50!