So, Where’s My Robot?

Thoughts on Social Machine Learning

SWMR: Odest Chadwicke Jenkins

A contribution to the SWMR Series

“Robotics off the shelf: stronger, faster, cheaper … now what?”

Like the development of personal computers through the 1970s and 80s, an explosion of stronger, faster, and cheaper robot platforms is underway, with robots becoming available as commercial off-the-shelf products. These robots have a growing capability to identify relevant aspects of varied environments, find and manipulate objects of interest, traverse diverse terrain, and act in a socially acceptable manner. As these robots make their way into society, there are questions to address: How will society use these robots? What are the uses we have yet to dream up? How does artificial intelligence meet these needs?

Technological revolutions like these are driven by a synergy between hardware platforms that manipulate physics and software that enables user applications, so a robot platform is only as good as the applications where it can be utilized. During my formative years in the 1980s, the personal computer was mostly an expensive novelty device with specialized applications that were often difficult to run and had tedious user interfaces. Computing of this era was driven by slow systems with command-line interfaces and floppy disk drives, a far cry from today’s user-friendly systems. Relatively few were willing to climb the learning curve for applications such as VisiCalc, an early spreadsheet, or Summer Games on an Apple IIe, Commodore 64, or IBM PC. Over time, advances in software created a synergy with hardware, where progress on one side pushed the other to meet and exceed new requirements. As a result, we now have a wealth of highly relevant and crucial software applications on a variety of computing devices, from desktops to supercomputers to smartphones. More importantly, our modern computing culture has increasingly succeeded in enabling greater populations of people to explore new forms of content and new applications without specialized training in computing. Brooks describes these trends as “exponentials” and provides a more in-depth treatment of the relationship between robotics and general computing exponentials in his recent talks (http://fora.tv/2009/05/30/Rodney_Brooks_Remaking_Manufacturing_With_Robotics) as well as recent robotics roadmapping efforts (http://www.us-robotics.us/).

While I see robotics following a similar evolution to personal computing, two issues make the robotics revolution distinctly challenging: uncertainty and purpose. The growth of personal computing has been due in large part to the “write local, run global” approach to software development. That is, a program written by a developer (write local) will reliably perform the same way when distributed to users across the world (run global) as it did for the original developer. Write local, run global is enabled by reliable modeling of information through manipulating the physics of electricity in closed and controlled systems buried deep inside computing devices. In robotics, however, physical interactions are much messier and more uncertain.

Consider the task of taking out the trash, say with an iRobot PackBot or a Willow Garage PR2. At a workplace, the steps may involve taking a bin from beside your desk to a larger receptacle within the building. At home, the task will be different: the bin may be behind a cabinet door and may need to be taken outside. There may be an elevator at work or stairs at home.

It appears to be a simple task, but rote programming of such a task requires the ability to recognize the object “trash can” from its appearance, to determine how to grasp, carry, and unload the bin without making a mess, and to draw on specific knowledge of the environment. Developing such a robot controller, or software, will surely require specialized training in computer programming as well as a significant cost in time and effort. Even after this controller is developed, our robot would only know how to remove trash in these two specific scenarios, and potentially only for that user. Additional users may have their own distinct desires, such as carrying certain bins carefully to avoid damage, handling recycling separately, or interacting with household pets. And what happens when a user wants to repurpose the robot for a new task that the developer has yet to consider and implement? Will human users be able to adapt to these new capabilities and even develop their own? Just as graphic artists likely have a different vision for a website than computer scientists do, there may be uses for robots that some scientists have yet to consider but that, once robotics is made accessible, will become an emerging area for innovation.
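To make that brittleness concrete, here is a toy sketch of what such a hardcoded controller might look like. The helper functions and site names are hypothetical stand-ins, not actual PackBot or PR2 code; the point is how many environment-specific details end up baked into the program:

```python
# Toy sketch of a hardcoded "take out the trash" controller.
# Every helper is a hypothetical stand-in that just prints the step
# it represents; a real controller would call a robot API instead.

def detect(obj):
    print(f"detect {obj}")
    return (1.0, 2.0)                        # fixed appearance model

def navigate_to(pose):
    print(f"navigate to {pose}")             # assumes a known floor plan

def grasp(pose):
    print(f"grasp at {pose}")                # a single grasp strategy

def take_out_trash(site):
    bin_pose = detect("office trash can")    # breaks if the bin looks different
    if site == "home":
        print("open cabinet door")           # home-only step, hardcoded
    navigate_to(bin_pose)
    grasp(bin_pose)                          # no care for fragile bins, recycling, or pets
    # even the route is site-specific: elevator at work, stairs at home
    print("dump into " + ("hallway bin" if site == "office" else "outdoor can"))

take_out_trash("office")
```

Every new environment, bin, or user preference means another branch written by a trained programmer.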

Robot “learning from demonstration” (LfD) has emerged as a compelling direction for addressing the above issues by enabling users to create robot controllers and applications through instruction. Through LfD, robots are programmed implicitly from a user’s demonstration (or other forms of guidance) rather than explicitly through an intermediate form (e.g., a hardcoded program) or task-unrelated secondary skills (e.g., computer programming). The intended behavior for a robot is “learned” from demonstrated examples of a human user’s intention. The key to unlocking the user’s desired robot controller lies in finding the hidden structure within this demonstration data.
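One common way to formalize this (an illustrative formulation, not necessarily the one used in any particular system) is as supervised learning of a policy from demonstrated (state, action) pairs. The minimal Python sketch below uses a nearest-neighbor policy over made-up demonstration data; real LfD systems substitute richer regression or probabilistic models:

```python
import numpy as np

# Demonstration data (made up for illustration): each row is a robot
# state, e.g., (gripper height, distance to bin); each label is the
# action the human teacher demonstrated in that state.
demo_states = np.array([[0.0, 2.0], [0.5, 1.0], [1.0, 0.2], [1.0, 0.0]])
demo_actions = ["approach", "approach", "grasp", "lift"]

def policy(state):
    """Return the demonstrated action whose state is closest to ours."""
    dists = np.linalg.norm(demo_states - np.asarray(state), axis=1)
    return demo_actions[int(np.argmin(dists))]

print(policy([0.95, 0.15]))  # -> "grasp", generalized from nearby demonstrations
```

The teacher never writes control code; the controller falls out of the examples, and the learning algorithm’s job is to uncover the structure that lets it generalize beyond them.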

Two trends in artificial intelligence give me strong belief that such robot LfD will become a reality. First, our ability to collect and process massive amounts of data for various problems has greatly improved. Successful examples include the use of Google for web search, reCAPTCHA for optical character recognition, and emerging tools such as the Amazon Mechanical Turk. Second, progress in robot LfD increasingly shows signs that many of the algorithmic pieces are in place to learn from human users. For example, my research group has been able to use LfD for various robotic tasks, such as enabling the iRobot PackBot to follow people and recognize their gestures and teaching soccer skills to Sony AIBO robot dogs. Our work is only a small slice of the accomplishments across the world, which include learning tasks ranging from simple object fetching, to cooperative object stacking with humans, to highly dynamic ball-in-cup games and aerial flight maneuvers. As robot platforms proliferate and demonstration data collection increases, my conjecture is that learning algorithms for robot LfD will truly take hold.

Odest Chadwicke Jenkins
Assistant Professor of Computer Science
Brown University

September 2nd, 2009 | HRI, Industry, Machine Learning, SWMR Guest
