So, Where’s My Robot?

Thoughts on Social Machine Learning

SWMR: Guy Hoffman

A contribution to the SWMR Series

“Robots: you have a body – use it!”

So where’s my robot?
Let’s focus on the my part of this question:

To me, “my” robot means a robot that just gets me. One that understands me on an intuitive, physical, gut-feeling level. A robot that moves in sync with my rhythm, that anticipates my every move and wish. One that’s exactly where I want it, and precisely when I want it to be there.

So when I envision the future of personal robotics, I imagine machines that move around the home or workplace in perfect synchrony with us; that dance a subtle dance with the humans in their surroundings; robots that seem to appear out of thin air just where you want them, and a second before you even formulated the request. And not in a creepy sneaking-up-on-you kind of way, but rather in a falling-in-love I-can’t-believe-we-just-said-that-at-the-same-time kind of way.

Whether we’re talking about robotic assistants to surgeons, or a machine that helps you unpack after a move – personal robots will only truly deserve their adjective when they adapt their physical movements to ours. It’s us humans’ fault: we’re just suckers for good timing.

While well-trained human teams can easily display such a beehive-like flutter of coordinated activities (don’t you love watching pit stop teams?), most robots interacting with humans today drag us into what amounts pretty much to a waiting game. It’s slow, choppy, unintuitive, and is usually structured in a somewhat tedious turn-taking fashion.

Why is that? What stands between us and my harmonic utopia of synchronized personal robotics? My hunch says it’s the brain. Ours and theirs. Could it be that most people who have been devising robots in academia and industry have been, well, too cerebral?

The pun is well intended. Considering that most robots are made by us mega-geeks, it’s no accident that robots are designed as brains with bodies, as computers with motors and sensors, and not the other way around. We CS types like to think in abstractions, in black boxes, in data and control flow, in learning as rules and as structured information. And most of us don’t like to exercise.

But when we think about humanity’s most important and impressive behaviors, from walking and grasping objects, to collaboration and communication, to artistic and athletic performance – we don’t learn them by making rules and categorizing decisions. Or at least we don’t get very good at them that way.

Instead we get good by practicing, by putting our bodies in there, and “there” is usually something repetitive and difficult. One cannot learn to ski in the classroom, and dance ensembles cannot perfect their fluency through written correspondence. You can read all you want about how to play the piano, but to play the piano well you need to actually go over the same passages again and again. And just like the visit to the dentist in Jarmusch’s “Coffee and Cigarettes”, you have to do it yourself, with your own body. You can’t have some intern send you the CliffsNotes for Chopin’s Waltz Op. 34 No. 2.

This is especially true in collaborative activities. You and your partner can practice the tango all week long in your own separate studios; if you don’t rehearse together, using your own co-located bodies, the chance that you’ll be ready for showtime is close to nil.

One of the most fascinating questions to me these days is what this thing called practice really is, and how it is distinct from traditional notions of learning. Why is it embodied, and what does this endless repetition bring?

I believe we are bodies with brains. And so should robots be. When modeling practice, we should try to steer away from information-driven models, and instead make the robots use their bodies, for example by modeling activity in perception and action networks, and by exploring physical repetition. In my own work, I’ve seen that people seem to connect on a deep emotional level to robots that improve through practice in subtle physical ways, anticipating the human’s motions, and moving in sync with them.
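As a toy illustration of anticipation through repetition (a hypothetical sketch, not the system from that work), consider a robot that tracks when its partner tends to act during repeated run-throughs of a shared routine, and gradually shifts from waiting-and-reacting to starting its own motion just ahead of the predicted moment. All names and numbers below are invented for the example.

```python
# A robot that "practices" a shared routine: it tracks when its human partner
# tends to act and gradually moves from reacting to anticipating.

class PracticePartner:
    def __init__(self, learning_rate=0.2, lead_time=0.3):
        self.learning_rate = learning_rate  # how quickly repetition reshapes the estimate
        self.lead_time = lead_time          # seconds to act ahead of the predicted moment
        self.predicted_onset = None         # running estimate of the human's action time

    def observe_repetition(self, human_onset):
        """Update the timing estimate after each shared repetition of the task."""
        if self.predicted_onset is None:
            self.predicted_onset = human_onset
        else:
            self.predicted_onset += self.learning_rate * (human_onset - self.predicted_onset)

    def next_action_time(self):
        """Before any practice, wait and react; with practice, start slightly early."""
        if self.predicted_onset is None:
            return float("inf")  # purely reactive: wait for the human to move first
        return self.predicted_onset - self.lead_time


robot = PracticePartner()
for onset in [2.1, 1.9, 2.0, 2.05]:   # the human's timing over repeated practice runs
    robot.observe_repetition(onset)
print(robot.next_action_time())       # the robot now starts just before ~2.0 seconds
```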

Who knows where those future robots will practice. Maybe there will be a robot playground they have to spend some time in, somewhere between the factory and the consumer. And maybe some of the practice will happen in customers’ homes. Will consumers have the patience for their new robot to get better at its tasks after they have already bought it?

My non-cerebral embodied gut-feeling says they will.

Guy Hoffman
Postdoctoral Associate
Georgia Institute of Technology Center for Music Technology

December 1st, 2009 | SWMR Guest

SWMR: Rebecca Grinter

A contribution to the SWMR Series

When Andrea asked me to write a piece for her blog, the first thing I did was ponder the phrase “so where’s my robot?” Of course, what she means is: where is my own robot, why do I not yet live with this technology? And as she compellingly argues, not just here but in her research, one significant problem is the mismatch between human and robotic intelligence. Specifically, until robots are more intelligent in their ability to interact with people, the range of things they will be able to do for people is limited in some fundamentally important ways.

But the same question can be asked by some people to mean something quite different. In about 2.5 million homes there is a robot—admittedly a rather limited one, but is that so different from the original computers we owned?—the Roomba vacuum cleaner. And Roomba is the first vacuum cleaner about which the phrase “so where’s my robot?” can be and is uttered. Normally, when we think about vacuum cleaning (do any of us think about it, other than resentfully?), we don’t think about where the cleaner has gone. Typically we know, since we’re holding on to it in some way. But Roomba is autonomous, and so it goes where it chooses…

And that makes it fascinating. How many of you have come home to discover that Roomba is not in its dock? “So where’s my robot?” becomes a search for the machine. A voyage of discovery: where might it be? Under the bed, caught in some electrical cords, or, even worse, having managed to use its bumper to shut a door on itself, trapping itself in a closet? And how many of us would admit to feeling a little bad that the device had gotten caught up and run out of batteries after pleading, through a series of beeps, for us to come retrieve it? Perhaps not, but I promise you that there are people who do feel bad, and if you think you’re not among them and you don’t own one, I suggest you experience it.

Roomba does more than induce a search-and-rescue operation in the home. It also inspires other types of behaviour. Perhaps the one that interests me most is that some people dress it up. In fact, enough people do this that there’s a company that makes a business out of selling costumes for Roomba. A business! I try to imagine dressing my upright vacuum, perhaps in a cape, perhaps as Super Vacuum Dirt Buster at Large? But it doesn’t work. By which I mean that it just doesn’t make any sense, now does it? But for some people dressing the Roomba seems like fun, and then watching that costumed appliance cruise the floors of the house, well, that is amusing, and it doesn’t seem altogether as wrong.

I understand that for some roboticists, the Roomba is not exciting. It is a relatively simple machine, perhaps almost non-robotic. I want to remind them that it is just the beginning, and it is a good beginning. It has turned the experience of robots from something one saw in films or read about in books into a lived experience, and one we can learn from. What interests me is not just what robots can do for people, but what people want to do with, and potentially even for, robots. And above all else, the first time someone came home and wondered where their robot had gone marks an important change in society: from a time when “where’s the robot?” was a question about any robot, to a time, soon coming, when it will be a question about a particular robot or a specific set of functions.

Where’s your robot? It’s coming.

Rebecca Grinter
Associate Professor of Interactive Computing
Georgia Institute of Technology

September 28th, 2009 | SWMR Guest

SWMR: Odest Chadwicke Jenkins

A contribution to the SWMR Series

“Robotics off the shelf: stronger, faster, cheaper … now what?”

Like the development of personal computers through the 1970s and 80s, an explosion of ever stronger, faster, and cheaper robot platforms is underway, and these platforms are becoming available as commercial off-the-shelf products. These robots have a growing capability to identify relevant aspects of varied environments, find and manipulate objects of interest, traverse diverse terrain, and act in a socially acceptable manner. As these robots make their way into society, there are questions to address: How will society use these robots? What are the uses we have yet to dream up? How does artificial intelligence meet these needs?

Technological revolutions like these are driven by a synergy between hardware platforms that manipulate physics and software that enables user applications, so a robot platform is only as good as the applications in which it can be used. During my formative years in the 1980s, the personal computer was mostly an expensive novelty device with specialized applications that were often difficult to run and had tedious user interfaces. Computing of this era was driven by slow systems with command-line interfaces and floppy disk drives, a far cry from today’s user-friendly systems. Relatively few were willing to climb the learning curve for applications such as VisiCalc, an early spreadsheet, or Summer Games on an Apple IIe, Commodore 64, or IBM PC. Over time, developments in software created a synergy in which advances on one side pushed the other to meet and exceed new requirements. As a result, we now have a wealth of highly relevant and crucial software applications on a variety of computing devices, from desktops to supercomputers to smartphones. More importantly, our modern computing culture has increasingly succeeded in enabling greater populations of people to explore new forms of content and new applications without specialized training in computing. Brooks describes these trends as “exponentials” and provides a more in-depth treatment of the relationship between robotics and general computing exponentials in his recent talks (http://fora.tv/2009/05/30/Rodney_Brooks_Remaking_Manufacturing_With_Robotics) as well as in recent robotics roadmapping efforts (http://www.us-robotics.us/).

While I see robotics following a similar evolution to personal computing, there are two issues that make the robotics revolution distinctly challenging: uncertainty and purpose. The growth of personal computing has been due in large part to the “write local, run global” approach to software development. That is, a program written by a developer (write local) will reliably perform the same way for users across the world (run global) as it did for the original developer. Write local, run global is possible because information can be modeled reliably by manipulating the physics of electricity in closed, controlled systems buried deep inside computing devices. In robotics, however, physical interactions are much messier and more uncertain.

Consider the task of taking out the trash, given, say, an iRobot PackBot or a Willow Garage PR2. At a workplace, the steps may involve taking a bin from beside your desk in your office to a larger receptacle within the building. At home, the task will be different: the bin may be behind a cabinet door and may need to be taken outside. There may be an elevator at work, or stairs at home.

It appears to be a simple task, but rote programming of it requires the ability to recognize the object “trash can” from its appearance and to grasp, carry, and unload the bin without making a mess, as well as specific knowledge of the environment. Developing such a robot controller, or software, will surely require specialized training in computer programming as well as a significant cost in time and effort. Even after this controller is developed, our robot would only know how to remove trash in these two specific scenarios, and potentially only for that user. Additional users may have their own distinct desires, such as how certain bins should be carried to avoid damage, how recycling should be handled separately, or how the robot should interact with household pets. And what happens when a user wants to repurpose the robot for a new task that the developer has yet to consider and implement? Will human users be able to adapt to these new capabilities and even develop their own? Just as computer scientists likely have a different vision for a website than graphic artists do, there may be uses for robots that have yet to be considered by scientists but that, once robotics is made accessible, will become an emerging area for innovation.
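To make the brittleness concrete, here is a rough, purely hypothetical sketch of what such a hand-coded controller might look like; the robot interface, environment names, and waypoints are invented for illustration and are not PackBot or PR2 code. Every branch bakes in one developer’s assumptions about one environment, so a new home, a new bin, or a new user means writing new code.

```python
# Hypothetical hand-coded "take out the trash" controller, with a mock robot
# that just logs its actions. Every branch encodes one developer's assumptions
# about one specific environment; none of it transfers to a new home or user.

class MockRobot:
    def do(self, action, **details):
        print(action, details)  # a real platform would execute the motion here


def take_out_trash(robot, environment):
    if environment == "office":
        bin_spot, destination = "beside_desk", "building_receptacle"
        route = ["hallway", "elevator"]
    elif environment == "home":
        robot.do("open", target="cabinet_door")  # at home the bin is hidden
        bin_spot, destination = "under_sink", "outdoor_can"
        route = ["kitchen", "stairs", "back_door"]
    else:
        raise ValueError("no behavior written for this environment")

    robot.do("recognize", obj="trash_can", near=bin_spot)  # fixed appearance model
    robot.do("grasp", obj="trash_can")                     # fixed grasp strategy
    for waypoint in route:
        robot.do("navigate_to", place=waypoint)
    robot.do("empty", obj="trash_can", into=destination)
    robot.do("return", obj="trash_can", to=bin_spot)


take_out_trash(MockRobot(), "office")
```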

Robot “learning from demonstration” (LfD) has emerged as a compelling direction for addressing these issues by enabling users to create robot controllers and applications through instruction. Through LfD, robots are programmed implicitly from a user’s demonstration (or other forms of guidance) rather than explicitly through an intermediate form (e.g., a hardcoded program) or task-unrelated secondary skills (e.g., computer programming). The intended behavior for a robot is “learned” from demonstrated examples of a human user’s intention. The key to unlocking the user’s desired robot controller lies in finding the hidden structure within this demonstration data.
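One common way to cast LfD, shown in the minimal sketch below, is as supervised learning over logged state-action pairs, often called behavioral cloning. The states, actions, and nearest-neighbor rule here are invented for illustration and do not correspond to any particular system from the literature.

```python
import numpy as np

# Toy behavioral cloning: the user's demonstrations are logged as
# (state, action) pairs, and the learned controller generalizes from them.
# State = [distance_to_bin, bin_is_full]; actions are symbolic labels.
demo_states = np.array([
    [3.0, 1.0],   # far from a full bin   -> approach it
    [0.4, 1.0],   # next to a full bin    -> grasp it
    [0.4, 0.0],   # next to an empty bin  -> leave it alone
    [5.0, 0.0],   # far from an empty bin -> leave it alone
])
demo_actions = ["approach", "grasp", "idle", "idle"]


def learned_policy(state):
    """Pick the action demonstrated in the most similar state (1-nearest neighbor)."""
    distances = np.linalg.norm(demo_states - np.asarray(state), axis=1)
    return demo_actions[int(np.argmin(distances))]


# The controller now responds to situations the user never explicitly programmed:
print(learned_policy([2.2, 1.0]))   # -> "approach"
print(learned_policy([0.5, 1.0]))   # -> "grasp"
```

Even this toy controller handles states the user never explicitly demonstrated, which is the appeal: the “program” is the demonstration data plus whatever structure the learner can find in it.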

Two trends in artificial intelligence give me strong confidence that such robot LfD will become a reality. First, our ability to collect and process massive amounts of data for various problems has greatly improved. Successful examples include the use of Google for web search, reCAPTCHA for optical character recognition, and emerging tools such as the Amazon Mechanical Turk. Second, progress in robot LfD increasingly shows that many of the algorithmic pieces are in place to learn from human users. For example, my research group has been able to use LfD for various robotic tasks, such as enabling the iRobot PackBot to follow people and recognize their gestures, and teaching soccer skills to Sony AIBO robot dogs. Our work is only a small slice of the accomplishments across the world, which include learning tasks ranging from simple object fetching, to cooperative object stacking with humans, to highly dynamic ball-in-cup games and aerial flight maneuvers. As robot platforms and demonstration data collection increase, my conjecture is that learning algorithms for robot LfD will truly take hold.

Odest Chadwicke Jenkins
Assistant Professor of Computer Science
Brown University

September 2nd, 2009 | HRI, Industry, Machine Learning, SWMR Guest