Several recently developed prototypes show that it is possible to build autonomous robots with remarkable capabilities. However, developing autonomous robots currently costs a fortune and takes forever! In this post, I treat autonomous cars and unmanned air vehicles as robots.
Sophisticated autonomous robots require hundreds of thousands of lines of code. Manually writing this code for a new robot is very expensive and time-consuming. Moreover, as the hardware changes, this code also requires significant upgrades. Often, by the time the code is written and debugged, the hardware is already obsolete. Therefore, developing autonomous robots is currently technically feasible but not affordable in many applications.
Human operators are very good at teleoperating robots in cluttered, unstructured, and dynamic environments with limited sensor data. Forget about expert operators teleoperating unmanned vehicles! Even five-year-olds can learn to teleoperate their first remote-control cars within a couple of hours, and they successfully annoy their parents, siblings, and dogs by zipping tiny cars around their homes. I have also seen teenagers perform amazing feats with their remotely controlled helicopters. So obviously, we should be interested in characterizing and understanding the strategies employed by human operators during these operations and in automatically extracting building blocks of the autonomy code based on this understanding. Many robotics researchers are pursuing this path, and this area of robotics is called learning from demonstrations.
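To make the idea concrete, here is a minimal sketch of learning from demonstrations. All names and the toy data are hypothetical; real systems use far richer state representations and learning methods, but the core pattern is the same: record (state, action) pairs while a human teleoperates, then learn a policy that imitates the operator. This sketch uses a simple nearest-neighbor lookup.

```python
# Hypothetical sketch of learning from demonstrations: a 1-nearest-neighbor
# policy that, given a sensed state, returns the action the human operator
# chose in the most similar recorded state.

def learn_policy(demonstrations):
    """demonstrations: list of (state, action) pairs recorded while a
    human teleoperated the robot. Returns a policy function."""
    def policy(state):
        # Find the demonstration whose state is closest (squared Euclidean
        # distance) to the current state, and imitate its action.
        def dist(pair):
            recorded_state, _ = pair
            return sum((a - b) ** 2 for a, b in zip(recorded_state, state))
        _, action = min(demonstrations, key=dist)
        return action
    return policy

# Toy demonstration log: state = (obstacle_left, obstacle_right) sensor
# readings, action = the operator's steering command.
demos = [((1.0, 0.0), "steer_right"),
         ((0.0, 1.0), "steer_left"),
         ((0.0, 0.0), "go_straight")]

policy = learn_policy(demos)
print(policy((0.9, 0.1)))  # imitates the operator: steer_right
```

With many demonstrations from many operators, the same lookup generalizes to states no single operator ever encountered, which is exactly what the rest of this post argues for.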
Many impressive results, ranging from training surgical robots to teaching collision avoidance to unmanned vehicles, have been reported by the learning-from-demonstrations community. Most such case studies have used a small number of humans to perform the demonstrations. Because of the limited number of demonstrations, people often wonder how well the learned components of autonomy will perform in situations not encountered during the demonstrations. Unfortunately, conducting extensive experiments in the physical world is highly time-consuming and expensive. It also limits the kinds of scenarios that can be considered during demonstrations: clearly, a demonstration that might pose a threat to the human or the robot has to be avoided. Conducting demonstrations in the virtual world is emerging as an attractive alternative.
Over the last few years, tremendous progress has been made in the area of physics-based robot simulators. For example, the ongoing DARPA Robotics Challenge is making extensive use of simulation technology to test autonomy components. Simulations are also routinely used to teach humans cognitive as well as motor skills; flight simulators used for pilot training are a familiar example.
By combining advances in multi-player games that can be played over the network with accurate robot simulations, new games can be developed in which humans compete and collaborate with each other by teleoperating virtual robots. This advancement means that demonstrations need not be confined to a few experts. Instead, anyone with an Internet connection can participate in the training of a new robot. For example, DARPA used a publicly distributed anti-submarine warfare game to learn how to track quiet submarines. We are ready to leverage crowds to impart autonomy to robots.
The use of crowdsourcing in robot training has many benefits. It provides rich diversity in demonstrations and hence improves the odds of generalization. Some participants are likely to exhibit out-of-the-box thinking and demonstrate a highly creative or innovative way of doing a task, which is great news for optimizing robot performance. For some people, this way of training robots might serve as a means to earn money by performing demonstrations (basically acting as robot tutors). Playing games that involve robots is likely to be entertaining for at least a segment of the population. This paradigm can also be used in situations where a robot is stuck during a difficult task and needs a creative solution to get out of a bind.
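The stuck-robot scenario suggests a simple pattern, sketched below with entirely hypothetical names: poll crowd players for a suggested recovery action and act on the most common answer, along with a measure of how much the crowd agreed.

```python
from collections import Counter

# Hypothetical sketch: a stuck robot broadcasts its situation to crowd
# players and collects their suggested recovery actions. The robot then
# executes the majority suggestion.

def crowd_vote(suggestions):
    """suggestions: list of action strings submitted by crowd players.
    Returns (winning action, fraction of players who suggested it)."""
    action, count = Counter(suggestions).most_common(1)[0]
    return action, count / len(suggestions)

# Five players weigh in on how to free a robot wedged against a wall.
votes = ["back_up", "turn_left", "back_up", "back_up", "turn_right"]
action, agreement = crowd_vote(votes)
print(action, agreement)  # back_up 0.6
```

A real system would of course need to filter malicious or nonsensical input, but the aggregation step itself is this simple: diversity of suggestions in, a single vetted action out.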
Automatically learning autonomy components such as reasoning rules, controllers, and planners from the vast amount of demonstration data is an interesting challenge and will keep the research community busy for many years to come. But it seems to be the crucial advance needed to reduce the cost of autonomous robots.
Don’t worry, robots! The crowd will rescue you from the dungeons of high cost and long development times!