So, Where’s My Robot?

Thoughts on Social Machine Learning

SWMR: Rebecca Grinter

A contribution to the SWMR Series

When Andrea asked me to write a piece for her blog, the first thing I did was ponder the phrase “so where’s my robot?” Of course, what she means is: where is my own robot, and why do I not yet live with this technology? As she compellingly argues, not just here but in her research, one significant problem is the mismatch between human and robotic intelligence. Specifically, until robots are more intelligent in their ability to interact with people, the range of things they will be able to do for people will remain limited in some fundamentally important ways.

But the same question can be asked by some people to mean something quite different. In about 2.5 million homes there is a robot—admittedly a rather limited one, but is that so different from the original computers we owned?—the Roomba vacuum cleaner. And Roomba is the first vacuum cleaner for which the phrase “so where’s my robot” can be, and is, uttered. Normally, when we think about vacuum cleaning (do any of us think about it, other than resentfully?), we don’t think about where the cleaner has gone. Typically we know, since we’re holding on to it in some way. But Roomba is autonomous, and so it goes where it chooses…

And that makes it fascinating. How many of you have come home to discover that Roomba is not in its dock? “So where’s my robot” becomes a search for the machine. A voyage of discovery: where might it be? Under the bed? Caught in some electrical cords? Or, even worse, has it managed to use its bumper to shut a door on itself, trapping it in a closet? And how many of us would admit to feeling a little bad that the device had gotten caught up and run out of batteries after pleading, through a series of beeps, for us to come retrieve it? Perhaps not, but I promise you that there are people who do feel bad, and if you think you’re not among them and you don’t own one, I suggest you experience it.

Roomba does more than induce a search-and-rescue operation in the home. It also inspires other types of behaviour. Perhaps the one that interests me the most is that some people dress it up. In fact, enough people do this that there’s a company that makes a business out of selling costumes for Roomba. A business! I try to imagine dressing my upright vacuum, perhaps in a cape, perhaps as Super Vacuum Dirt Buster at Large? But it doesn’t work. By which I mean that it just doesn’t make any sense, now does it? But for some people dressing the Roomba seems like fun, and then watching that costumed appliance cruise the floors of the house, well, that is amusing, and it doesn’t seem altogether as wrong.

I understand that for some roboticists, the Roomba is not exciting. It is a relatively simple machine, perhaps almost non-robotic. I want to remind them that it is just the beginning, and it is a good beginning. It has turned the experience of robots from something one saw in films or read about in books into a lived experience, and one we can learn from. What interests me is not just what robots can do for people, but what people want to do with, and potentially even for, robots. And above all else, the first time that someone came home and wondered where their robot had gone marks an important change in society: from a time when “where’s the robot?” was a question about any robot, to a time, soon coming, when it will be a question about a particular robot or a specific set of functions.

Where’s your robot? It’s coming.

Rebecca Grinter
Associate Professor of Interactive Computing
Georgia Institute of Technology

September 28th, 2009 Posted by | SWMR Guest | no comments

Should your robot learn like a child?

Alison Gopnik recently had an opinion article in the NYTimes.  Gopnik is a psychologist who studies child development and “Theory of Mind.”

I find much of Gopnik’s work inspiring for robot learning, and the ideas in this article are a good example.  She lays out evidence and findings about the differences between adult and child learning.  In many ways children are much better at learning and exploring than adults.  They observe and create theories that are consistent with a keen probabilistic analysis of the events they have seen.  These theories guide their “play,” or exploration, in a way that efficiently gathers information about their complex and dynamic world.

The description of adult versus child-like learning sounds like the traditional explore/exploit tradeoff in machine learning.  But this raises a question we are often asked with respect to robot learning: do we actually want robots to explore like children?  I think the answer is yes and no.  We probably don’t want robots to need a babysitter, but we do want robots to exhibit the kind of creativity and experimentation that you see in, for example, Gopnik’s studies of causal structure.
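To make the explore/exploit framing concrete, here is a minimal sketch of an epsilon-greedy action selector, the textbook way this tradeoff shows up in machine learning. It is purely illustrative (the names and numbers are mine, not from Gopnik’s studies or from any particular robot system); a “child-like” learner corresponds to a high exploration rate, an “adult-like” one to a low rate.

```python
import random

def epsilon_greedy(value_estimates, epsilon):
    """Choose an action index: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        # Explore: try any action, like a child poking at the world
        return random.randrange(len(value_estimates))
    # Exploit: pick the action that has worked best so far
    return max(range(len(value_estimates)), key=lambda a: value_estimates[a])

def update_estimate(value_estimates, counts, action, reward):
    """Running-average update of the chosen action's value after seeing a reward."""
    counts[action] += 1
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

# A "child-like" setting explores often (epsilon = 0.5); an "adult-like" one rarely (epsilon = 0.05).
values, counts = [0.0, 0.0, 0.0], [0, 0, 0]
action = epsilon_greedy(values, epsilon=0.5)
```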

I’m most excited about the idea that Gopnik ends the article with: “But what children observe most closely, explore most obsessively and imagine most vividly are the people around them. There are no perfect toys; there is no magic formula. Parents and other caregivers teach young children by paying attention and interacting with them naturally and, most of all, by just allowing them to play.”

I think that the importance of social learning in human development is a strong argument for robot learning by demonstration or instruction—that we should be looking for the shortcuts and computational gains we can get from leveraging a partner.

September 22nd, 2009 Posted by | Machine Learning, Situated Learning | no comments

ToyBots

A new company, ToyBots, was announced at the TechCrunch 50 event this week.  Their vision is a combination of social networking, online gaming, and robot toys: similar in spirit to Webkinz or Club Penguin, but with a tighter coupling between the physical toy and the virtual world.

Interestingly, this company isn’t exactly focused on developing the end-user scenarios; they want to provide the mechanism and infrastructure for other people to develop robot toys that can be connected through games and social networks online.  Like an AppStore for robot toys, they say, calling it “the internet of things.”

Looks fun; it will be interesting to see where this goes.  And it’s great to see social robots in the TechCrunch 50!

September 15th, 2009 Posted by | Industry | no comments

ISRR 2009

I had the pleasure of attending the 2009 International Symposium on Robotics Research.  I was invited to talk in the Human-Robot Interface session, where I gave a brief overview of some recent work and discussed how we are working to address the guidance-exploration spectrum of social learning.

This was by far the best robotics conference I’ve been to in a while.  The diversity of topics, the quality of the presentations, and the highly engaged audience were all great.  Here are some highlights, in no particular order:

  • Sami Haddadin presented work from DLR about a robot co-worker, including previous work on safe robot control, and newer work that involves sensing a human in the workspace and the interaction schemes that are appropriate for different co-working scenarios.
  • Russ Tedrake presented his recent work on building robots that fly like birds! (e.g., perching on a string)
  • Hiroshi Okuno gave a talk about robot audition, and demonstrated their system for speaker disambiguation using just 2 microphones.  It looks great, and is freely available.
  • Marc Pollefeys’ 3D reconstruction from video (in real time!) was quite impressive.
  • There was a presentation about the HRP-4C, which has been all over YouTube for some time now — but I had not yet seen this video, which the speaker announced as the “world’s first robot bride.”
  • Dillman’s lab at Karlsruhe presented their work on interactive learning in the humanoids session, showing lots of great video of their robots doing kitchen tasks.
  • Prof. Inaba presented an overview of their lab’s work at the University of Tokyo, and the sheer number and diversity of robots had their American colleagues drooling.  This is where it shows that US robotics research doesn’t get anywhere near the level or longevity of funding you see in Japan and the EU.

September 7th, 2009 Posted by | Conferences | no comments

SWMR: Odest Chadwicke Jenkins

A contribution to the SWMR Series

“Robotics off the shelf: stronger, faster, cheaper … now what?”

Like the development of personal computers through the 1970s and 80s, an explosion of ever stronger, faster, and cheaper robot platforms is emerging and becoming available as commercial off-the-shelf products. These robots have a growing capability to identify relevant aspects of varied environments, find and manipulate objects of interest, traverse diverse terrain, and act in a socially acceptable manner.  As these robots make their way into society, there are questions to address: How will society use these robots? What are the uses we have yet to dream up? How does artificial intelligence meet these needs?

Technological revolutions like these are driven by a synergy between hardware platforms that manipulate physics and software that enables user applications, so a robot platform is only as good as the applications where it can be utilized.  During my formative years in the 1980s, the personal computer was mostly an expensive novelty device with specialized applications that were often difficult to run and had tedious user interfaces.  Computing of this era was driven by slow systems with command-line interfaces and floppy disk drives that are a far cry from today’s user-friendly systems. Relatively few were willing to climb the learning curve for applications such as VisiCalc, an early spreadsheet, or Summer Games on an Apple IIe, Commodore 64, or IBM PC.  Over time, developments in software created a synergy between hardware and software development in which advances on one side pushed the other to meet and exceed new requirements.  As a result, we now have a wealth of highly relevant and crucial software applications on a variety of computing devices, from desktops to supercomputers to smartphones.  More importantly, our modern computing culture has increasingly succeeded in enabling greater populations of people to explore new forms of content and new applications without specialized training in computing.  Brooks describes these trends as “exponentials” and provides a more in-depth treatment of the relationship between robotics and general computing exponentials in his recent talks (http://fora.tv/2009/05/30/Rodney_Brooks_Remaking_Manufacturing_With_Robotics) as well as in recent robotics roadmapping efforts (http://www.us-robotics.us/).

While I see robotics following a similar evolution to personal computing, there are two issues that make the robotics revolution distinctly challenging: uncertainty and purpose.  The growth of personal computing has been due in large part to the “write local, run global” approach to software development.  That is, a program written by a developer (write local) will reliably perform the same way when distributed to users across the world (run global) as it does for the original developer. Write local, run global is enabled by the reliable modeling of information through manipulating the physics of electricity in closed and controlled systems buried deep inside computing devices. In robotics, however, physical interactions are much messier and more uncertain.

Consider the task of taking out the trash, say with an iRobot PackBot or a Willow Garage PR2.  The steps to do this at a workplace may involve taking a bin from beside your desk in your office to a larger receptacle within the building.  At home, the task will be different: the bin may be behind a cabinet door and may need to be taken outside. There may be an elevator at work or stairs at home.

It appears to be a simple task, but rote programming of such a task requires the ability to recognize the object “trash can” from its appearance and to determine how to grasp, carry, and unload the bin without making a mess, as well as specific knowledge of the environment. Developing such a robot controller, or software, will surely require specialized training in computer programming as well as a significant cost in time and effort.  Even after this controller is developed, our robot would only know how to remove trash in these two specific scenarios, and potentially only for that user.  Additional users may have their own distinct desires, such as how certain bins should be carried to avoid damage, separate handling of recycling, and interacting with household pets.  And what happens when a user wants to repurpose the robot for a new task that the developer has yet to consider and implement?  Will human users be able to adapt to these new capabilities and even develop their own?  Just as computer scientists likely have a different vision for a website than graphic artists do, there may be uses for robots that have yet to be considered by scientists but that, once robotics is made accessible, will become an emerging area for innovation.

Robot “learning from demonstration” (LfD) has emerged as a compelling direction for addressing the above issues by enabling users to create robot controllers and applications through instruction.  Through LfD, robots are programmed implicitly from a user’s demonstration (or other forms of guidance) rather than explicitly through an intermediate form (e.g., a hardcoded program) or task-unrelated secondary skills (e.g., computer programming).  The intended behavior for a robot is “learned” from demonstrated examples of a human user’s intention.  The key to unlocking the user’s desired robot controller lies in finding the hidden structure within this demonstration data.
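One minimal way to picture “finding the hidden structure” is the simplest form of LfD, sometimes called behavioral cloning: fit a policy to the state-action pairs a human demonstrated, then query it in new states. The sketch below is only an illustration under that assumption; the states, action labels, and choice of a nearest-neighbor learner are mine, not a description of any particular LfD system mentioned here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical demonstration log: each row is a robot state (e.g., a pair of
# sensor readings) and each label is the action the human chose in that state.
demo_states = np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.8, 0.2]])
demo_actions = np.array(["grasp_bin", "grasp_bin", "drive_forward", "drive_forward"])

# "Learning" here is ordinary supervised learning over the demonstrations:
# the policy captures whatever regularity maps states to demonstrated actions.
policy = KNeighborsClassifier(n_neighbors=1).fit(demo_states, demo_actions)

# At run time the robot consults the learned policy in states it has never seen.
print(policy.predict(np.array([[0.75, 0.25]])))  # -> ['drive_forward']
```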

Two trends in artificial intelligence give me strong confidence that such robot LfD will become a reality.   First, our ability to collect and process massive amounts of data for various problems has greatly improved.  Successful examples include the use of Google for web search, reCAPTCHA for optical character recognition, and emerging tools such as the Amazon Mechanical Turk.  Second, progress in robot LfD increasingly shows that many of the algorithmic pieces are in place to learn from human users.  For example, my research group has been able to use LfD for various robotic tasks, such as enabling the iRobot PackBot to follow people and recognize their gestures and teaching Sony AIBO robot dogs to acquire soccer skills.  Our work is only a small slice of the accomplishments across the world, which include learning tasks ranging from simple object fetching, to cooperative object stacking with humans, to highly dynamic ball-in-cup games and aerial flight maneuvers.  As robot platforms proliferate and demonstration data collection increases, my conjecture is that learning algorithms for robot LfD will truly take hold.

Odest Chadwicke Jenkins
Assistant Professor of Computer Science
Brown University

September 2nd, 2009 Posted by | HRI, Industry, Machine Learning, SWMR Guest | one comment