Robots learn to adapt to damage the way animals do

A group of computer scientists has found a way to let robots adapt to handicaps in much the same way animals and humans do.

A robot automatically learns to keep walking after damage via a newly invented "Intelligent Trial and Error" algorithm.
Source: Antoine Cully/UPMC

If the approach holds up, it could mean that less fragile robots will be better able to work in a wider range of jobs and in more treacherous conditions, from construction to bomb detection. It could also mark a big step toward creating robots that learn.

Robots have the potential to perform better than human beings at tasks in fields as varied as medicine, manufacturing and even the military. But they lack several key traits of humans and animals, and one of them is the ability to learn how to quickly get back to work when something goes wrong.

A person who loses the use of an arm can swiftly learn to use the other arm to compensate. A dog that loses a leg can still walk reasonably well on the other three.

But a broken limb can render a robot completely useless, because most robots do not know how to work around injuries.

Some current approaches involve running diagnostic tests to figure out what is wrong with a broken robot, which can be time-consuming and may require an engineer. Others have the robot try every possible behavior to find one that compensates for the damage. Since a robot can take an incredibly large number of possible actions, that approach can also take a long time.

So scientists at Pierre and Marie Curie University in France and the University of Wyoming gave a robot the sorts of tools animals use to learn—a range of previous experiences to draw on for understanding, and a way of predicting which behaviors are most likely to work in a given situation.

The team published the results in the journal Nature on Wednesday.

The robots in the study were equipped with simulations that mapped out the best possible actions they could take to perform a task such as walking. They ran those simulations ahead of time, so the information was already on hand when something went wrong. The robots were not simply storing every possible way they could walk; they were predicting how effective each style of walking would be.
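
In rough outline, that pre-computed lookup works something like the short Python sketch below. It is an illustration of the idea, not the researchers' actual system: the gait encoding, the simulate_gait stand-in and the scoring are hypothetical placeholders for a real physics simulator.

```python
import random

# Minimal sketch of the pre-computed "map" described above: in simulation,
# the undamaged robot evaluates many candidate gaits and stores each one
# with its predicted performance, best first. random_gait() and
# simulate_gait() are hypothetical stand-ins for a real gait encoding and
# physics simulator.

def random_gait(n_params=36):
    # A gait here is just a vector of joint-control parameters in [0, 1].
    return [random.uniform(0.0, 1.0) for _ in range(n_params)]

def simulate_gait(gait):
    # Placeholder: a real system would run a physics simulation and return
    # how far the undamaged robot walks with this gait (e.g. in meters).
    return sum(gait) / len(gait)

def build_behavior_map(n_candidates=10000):
    # Pair every candidate gait with its simulated score, then sort so the
    # most promising gaits come first when the robot needs them later.
    candidates = [random_gait() for _ in range(n_candidates)]
    behavior_map = [(g, simulate_gait(g)) for g in candidates]
    behavior_map.sort(key=lambda pair: pair[1], reverse=True)
    return behavior_map
```

In the study's version, the stored gaits also span deliberately different styles of walking, which is what later gives the damaged robot genuinely different options to fall back on.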

One six-legged walking robot ranked 13,000 styles of walking by their predicted effectiveness. Once damaged, it began testing the styles it predicted would work best, ruling out options until it arrived at the best choice.
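
The recovery process can be sketched in the same spirit, continuing the example above: the damaged robot works through its gaits in order of predicted effectiveness, measures how far each one actually carries it, and stops once a trial lives up to its prediction. The measure_on_robot stand-in and the stopping rule are assumptions for illustration, not the team's exact criteria.

```python
import random

# Sketch of the recovery loop described above. measure_on_robot() is a
# hypothetical stand-in for executing a gait on the real, damaged robot
# and measuring the distance covered; the fake measurement below just
# lets the loop run on its own.

def measure_on_robot(gait):
    # Pretend the damage degrades every gait by some unknown, noisy factor.
    return (sum(gait) / len(gait)) * random.uniform(0.2, 1.0)

def adapt_after_damage(behavior_map, good_enough=0.9):
    # behavior_map: list of (gait, predicted_score), best predictions first,
    # as produced by build_behavior_map() in the earlier sketch.
    best_gait, best_measured = None, float("-inf")
    for gait, predicted in behavior_map:
        measured = measure_on_robot(gait)           # one real-world trial
        if measured > best_measured:
            best_gait, best_measured = gait, measured
        if measured >= good_enough * predicted:     # prediction held up
            return best_gait                        # adopt this gait
    return best_gait                                # otherwise best found
```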

That kind of stored knowledge is similar to what animals rely on. Humans, for example, don't simply try new ways of walking at random; past experience helps them choose the gaits that are likely to be most comfortable and quickest. The same goes for just about any action animals perform.

That knowledge about which actions are likely to work best is what separates the team's new method from most of those currently available, according to the researchers. They even refer to it as a "simulated childhood," because it mimics the kinds of memories that enable animals to learn new behaviors.

"Once damaged, the robot becomes like a scientist," said lead author Antoine Cully in a press release. "It has prior expectations about different behaviors that might work, and begins testing them. However, these predictions come from the simulated, undamaged robot. It has to find out which of them work, not only in reality, but given the damage. Each behavior it tries is like an experiment and, if one behavior doesn't work, the robot is smart enough to rule out that entire type of behavior and try a new type."

The team was able to get its six-legged robot to walk well again in only a few minutes after two of its six legs were broken.

The researchers said their method is faster and cheaper than many existing techniques because it does not require the robot (or its operator) to diagnose the problem. The robot only measures its own performance, so it simply figures out which other action works best as its circumstances change.

The team said its approach could work for any kind of action a robot takes. The researchers noted, for example, that robots could become far more effective in search-and-rescue missions: a robot damaged by falling debris could still find a new way to function in a situation where time is short and lives are at stake.

But the team hopes its discovery will allow robots to quickly learn how to do anything.

"Until now, nearly all approaches for having robots learn took many hours, which is why videos of robots doing anything are often extremely sped up," the team wrote in a statement attached to the study. "Watching them learn in real time was excruciating, much like watching grass grow. Now we can see robots learning in real time, much like you would watch a dog or child learn a new skill."