It’s the DARPA Grand Challenge on steroids: a Prius driving in traffic!
This is cool in and of itself for robotics, but even better is a company with huge resources deciding to put a lot of effort into robotics. They have been quite hush-hush about what they might be working on, but have quietly been inviting lots of top-notch robotics researchers to take a one-, two-, or n-year sabbatical from their academic jobs and come work on robotics at Google. So I expect this is the first of many cool things to come out of Google-Robotics.
Led by my colleague Henrik Christensen, there has been an effort over the past year in the robotics community to define a “roadmap” for robotics research in the US. Supported by the Computing Community Consortium, four workshops were held that brought together representatives from industry and academia to outline the most fruitful research questions for the US to be pursuing. These discussions were centered on four focus areas: Manufacturing, Healthcare, Service Robots, and Emerging Technologies.
These discussions resulted in a concrete roadmap that outlines a research agenda for non-military robotics research. US funding agencies have a strong history of supporting research for military robotics, but there’s been very little funding available for non-military applications. So this is what Henrik and Co. went and told the congressional caucus on robotics on May 21st. They presented the roadmap and made the case for US research funding to be specifically allocated for non-military robotics through such agencies as the NSF, the NIH, and others.
In addition to the presentation, there was a demo session for congressional staffers. My student, Maya Cakmak, took our interactive learning demonstration. So, our robots have been to Congress, cool!
It has been very exciting to be a part of the roadmap discussion, and we’re all looking forward to seeing what impact it might have on future funding initiatives for robotics.
Congrats to the folks at Willow Garage, they’ve announced that PR2 passed its second milestone (opening doors and plugging itself into power outlets). I got to see individual demos of the door opening and outlet plugging when I visited their lab in March. So, great to see the progress on integration and robustness.
Hope this means we’ll soon see the call for proposals for the PR2s that WG is going to be sending out into the world!
The short story: Randy has been fighting cancer, and recently found out that the treatment hasn’t worked, and he has only a few months. Randy is famous for co-founding the Entertainment Technology Center at CMU, and for directing the Alice software project. The lecture is a very thoughtful reflection on his life and work and is an inspiration.
Language Learning Keynote — Luc Steels gave a nice talk about language learning. His goal is for machines to be able to really speak and understand language, and his approach is that of emergent communication. The idea is that language is never perfect; people communicate by continually repairing misunderstandings and continually building up a common ground or common language, “inventing” new terms, and so on. This is very different from a corpus-based machine learning approach that assumes a stationary environment and other such things. So, his platform for studying this kind of emergent language development, or learning-all-the-time approach to language, is language games for robots. He has some interesting examples with QRIO robots. It was a bit unclear how the all-important repair phase was instigated or took place. But it’s great to see such an interactive alternative to natural language processing.
Human-Robot Interactive Teaching — I was in a session organized by Joe Saunders and Chrystopher Nehaniv. You can read more about my talk. I enjoyed Sylvain Calinon’s talk in the session about “Active Teaching”. There are a lot of people working on robot “Learning by Imitation”, but his work addresses an important question about the nature of an imitation interaction. Human imitation learning is an active process: someone is teaching you something, they demonstrate it, you try it, you get it wrong, they go into more detail on a particular aspect, and on and on. Sylvain has some nice work on teaching a robot a task by showing it the physical movement. It then copies the movement, and you can stop the robot, select particular motors, and physically move it through the motion during the correction process. There’s more work to be done on the interaction, and I don’t think they’ve tried it with non-expert users yet. But this is work to keep an eye on.
The Uncanny Valley — I’m sorry, but I missed the uncanny valley boat. I understand the qualitative assessment that there is a class of things (like zombies, and almost human things) that people find creepy. And that many robots fall into this category either by the way they look or the way they behave. But from what I have seen and read, it seems that we all basically agree that 2 dimensions are really not enough to represent what it is that people find creepy. And that this 2-d graph of Mori’s is not based on quantitative data that we can measure our robots against. So, I’m always amazed at how many people show graphs of the uncanny valley in their talks. Someone, please enlighten me in the comments, why should I care about this uncanny valley graph?
Robot design workshop — This session was one of the better ones that I saw. Prof. Myung Suk Kim has a lab at KAIST that does some great work on robot design, and he gave a presentation overviewing the work going on in his lab. Jodi Forlizzi talked about five aspects of designing for HRI: gaze (social attention), speech/sound, gesture, motion, and personality. And Tomotaka Takahashi, a PhD student at Kyoto University, gave a talk about two robots that he has built, the Chroino and the FT (for female type…). They are pretty cute, and the Chroino was licensed and developed into a product: Manoi.
Four Keepons dancing! — Hideki Kozima and Marek Michalowski had a nice showing in the demonstrations event: four Keepon robots side-by-side, dancing away and interacting with people. They were in the middle of a Keepon tour, first winning the Robots at Play design competition in Denmark, then RO-MAN, and now they’re at NextFest in LA with Wired. Wired likes Keepon so much they did a response video with Spoon. It’s good clean robot fun.
Meeting a reader — I met one of the readers of this blog, which doesn’t happen in person very often, so that was fun! She was concerned that I’m going to quit blogging once I get busy with the new job. But, that’s not the intention, I plan to make time to keep the Social Robot research conversation going in the blog-o-sphere.
I really liked Korea, the people were extremely friendly. I will definitely have to make a trip back, perhaps to visit Robot Land!
Earlier this month I attended the 15th IEEE Symposium on Robot and Human Interactive Communication (RO-MAN 2006), held just outside of London in Hatfield, UK, home of the University of Hertfordshire (and the oldest pub in England!).
The conference was kicked off with a keynote talk by Shuji Hashimoto of Waseda University in Japan. Hashimoto-san’s talk characterized the big difference between this conference and most robotics conferences: the focus on the human element, and the importance of designing robots that are social and fit seamlessly into our human world. In his talk he posited that the robot industry has not yet launched, and will not launch, until we usher in a new era of “partner” robots, what he calls robots with “Kansei” (roughly, “machines with heart”). Which was a provocative enough topic to get a little press coverage.
There were many topics and papers at the conference relevant to this blog, but for now I’ll highlight some papers presented about learning and teamwork.
Y. Demiris, B. Khadhouri, “Content-Based Control of Goal-Directed Attention During Human Action Perception” Yiannis Demiris has a history of work in imitation learning, specifically on the ideas of learning forward models. This paper was about how to use forward models for goal-directed attention in an imitation task. The imitator should use its forward models to direct attention to the object or part of the environment it would be controlling if it were performing the action the demonstrator is doing.
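The attention mechanism above can be sketched in a few lines. This is a toy illustration of the general idea rather than the paper’s actual architecture: each candidate action hypothesis has a forward model that predicts how “its” object would move, and the imitator attends to the object whose predicted motion best matches the observed demonstration. The `make_forward_model` helper and constant-velocity dynamics are my own simplifying assumptions.

```python
import numpy as np

def make_forward_model(velocity):
    """A toy forward model: predicts constant-velocity motion for its object.
    (In the actual work, forward models are learned, not hand-coded.)"""
    def predict(state):
        return state + velocity
    return predict

def attend(object_states, next_states, forward_models):
    """Return the index of the object whose forward model best explains
    the observed motion, i.e., the object the imitator would be
    controlling if it were executing the demonstrated action itself."""
    errors = []
    for i, model in enumerate(forward_models):
        predicted = model(object_states[i])
        errors.append(np.linalg.norm(predicted - next_states[i]))
    return int(np.argmin(errors))
```

For example, if object 0 moves exactly as its forward model predicts while object 1 stays put, attention is directed to object 0, the one the demonstrated action is apparently acting on.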
J. Saunders et al., “Using Self-Imitation to Direct Learning”
This talk was interesting because it had some fun examples of social scaffolding in chimps. Mother chimps scaffold the child’s learning of nut cracking by putting the tools and nuts in favorable positions. Another example was with the chimps Washoe and Loulis: Washoe taught Loulis American Sign Language by demonstration and by putting Loulis’ hands in the correct positions. The point of these examples is the importance of a teacher being able to physically manipulate the learner as part of the learning process. This type of embodied demonstration is common in nature and often overlooked in robot learning.
S. Calinon and A. Billard, “Teaching a Humanoid Robot to Recognize and Reproduce Social Cues“
This paper is mostly about a gesture recognition/learning system. But the part I find most interesting is that the goal of these social cues is to let the robot participate in an imitation game where social cues actually frame the learning process. This is in contrast to much of the work in Learning by Demonstration that assumes nicely segmented examples from the human teacher.
A. Thomaz, G. Hoffman, and C. Breazeal, “Reinforcement Learning with Human Teachers“
Our paper was about an exploration of how everyday people approach the task of teaching a reinforcement learning agent in a computer game. They give more positive than negative feedback, they try to guide as well as give feedback, and they adjust their behavior as they develop a model of the learner.
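To make the setup concrete, here is a minimal sketch of the kind of interactive learning loop involved, assuming a standard tabular Q-learning agent where the reward signal is the teacher’s feedback (say +1 for praise, -1 for scolding) and where the teacher can also “guide” by suggesting the next action. The function names and parameters are my own for illustration, not the actual system from the paper:

```python
import random

def interactive_q_update(Q, state, action, human_feedback, next_state, actions,
                         alpha=0.5, gamma=0.9):
    """One Q-learning step where the reward is the human teacher's feedback
    signal (e.g. +1 praise, -1 scolding, 0 silence)."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (human_feedback + gamma * best_next - old)
    return Q[(state, action)]

def select_action(Q, state, actions, guidance=None, epsilon=0.1):
    """Epsilon-greedy action selection, except the teacher can bias
    exploration by suggesting (guiding toward) a particular action."""
    if guidance is not None:
        return guidance          # follow the teacher's suggestion
    if random.random() < epsilon:
        return random.choice(actions)  # explore
    return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit
```

The interesting design question the paper raises is exactly this split: people don’t just sit back and reward completed actions, they want a guidance channel for steering the agent toward what to try next.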
There were two or three sessions on human-robot collaboration and teamwork. One aspect that’s interesting to note is that most were Wizard of Oz studies (i.e., a human remote-controls the robot during the task), presumably because there aren’t any robots out there capable enough to participate in an interesting teamwork task with a human. So, I just think it’s interesting to consider what we’re learning from Wizard of Oz studies. The goal is for them to give us design principles for building autonomous robots. But to play devil’s advocate: by giving a robot a “human brain” and having it collaborate with a person, does this just reduce to a study of handicapped human-human collaboration? Anyhow, here are a couple of nice papers from the H-R collaboration sessions.
Kim and Hinds, “Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human Robot Interaction”
In human-robot teamwork, who gets the credit and who gets the blame? In this paper they report a series of studies looking at how different levels of autonomy and different levels of transparency (the robot explaining its behavior) affected people’s judgments. Increased autonomy of a robot increases blame more than credit, but increased transparency decreases the blame directed toward the robot.
Oestreicher and Eklundh, “User Expectations on Human-Robot Cooperation”
Their study looks at public opinion about human-robot teamwork. After interviewing people about what kinds of assistive robots they would want, the authors find that people want a partner, not a tool, but they are wary of giving up control. My favorite example was a woman who loved gardening but was physically unable to do it anymore. She didn’t want a robot to do her gardening for her; she wanted the robot to enable her to again participate in this activity she loved.
Who am I? My name is Andrea Thomaz. I’m a postdoc at MIT, working at the Media Lab with Cynthia Breazeal.
What’s this blog about? I’m starting this blog around the topic of Social Machine Learning, which has been the topic of my research for several years now. I’m happy to take suggestions of topics for future posts, email me.
Why a blog?
This is a research experiment of mine, so we’ll see if it has legs. The blogging medium is an exciting one for academic research communities, and I believe that we’ll continue to see more people using this medium in addition to the traditional conference and journal channels.
I hope that this can be a useful way for me to vet ideas both developed and not so developed, comment on and critique related research, and highlight news and industry developments related to Social Machine Learning. And, lastly, I hope to foster some relationships with like-minded researchers out there. So feel free to post comments or send me email.