A computer created by Google taught itself to play Space Invaders



A computer created by an online division of the American giant Google has taught itself to play 49 video games, including Space Invaders, made by Atari and very popular in the 1980s, and is at least as good as a human player at more than half of them, bbc.com reports.

Researchers at Google DeepMind said it was the first time a computer system had taught itself to master and execute a series of complex tasks.

The study was published in the journal Nature.

"Until now, systems capable of self-learning have been used only for relatively simple problems. For the first time anywhere, we have used such a system in a perceptually rich environment to perform tasks that people can do," said Demis Hassabis, vice president of engineering at DeepMind.

Information technology companies are investing heavily in artificial intelligence. In 2014, Google bought DeepMind Technologies for over 618 million dollars.

Computers have been able to play complex games in the past. Deep Blue, a chess-playing computer developed by IBM, defeated world champion Garry Kasparov in a match held in 1997.

However, Deep Blue was preprogrammed with a sort of instruction manual that provided the expertise needed to excel at chess.

The DeepMind computer uses a program, which the company calls an "agent", that contains only basic information before it is offered a video game to play.

"The only information we provided the system was the pixels on the screen and the idea that it needed to get a high score. The rest it had to figure out on its own," said Hassabis.
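The setup Hassabis describes is the reinforcement-learning loop: the agent observes the screen, acts, receives a score signal, and learns which actions maximise future reward. As a minimal illustration of that idea only, here is a tabular Q-learning agent on a toy five-cell corridor. This is a hypothetical stand-in, not DeepMind's actual system, which used a deep neural network over raw Atari pixels (the "DQN" agent).

```python
import random

N_STATES = 5          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Apply an action; the only feedback is a reward of 1.0 at the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] >= q[state][1] else 1
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: nudge the estimate toward reward + discounted future value
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # After training, the greedy policy should step right in every non-goal cell.
    print(["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)])
```

The agent is never told where the reward is or what the actions mean; like the Atari agent, it discovers a policy purely from trial, error and the score.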

Researchers presented the computer with 49 video games, including classics such as Space Invaders and Pong, boxing and tennis games, 3D platform games created by Atari, and the racing game Enduro.

In 29 of the games, the computer system's performance was comparable to or better than that of an experienced human player. In Pinball, Boxing and Breakout, its performance surpassed that of a human professional, but it got bad results at Pac-Man, Montezuma's Revenge and Private Eye.

"On one hand, it may seem modest, in the sense that we are talking about games from the 1980s, and solutions can be written very quickly that play these games faster and better. But what is important is that we have a system that can learn what to do based only on what it perceives, that is, pixels. The same system can play 49 different games without any prior programming. Basically, it is offered a new game, a new screen, and after only a few hours of play it figures out what to do," said Demis Hassabis.

The research is the latest success in "deep learning", which paves the way for smarter computers.

Scientists develop computer programs which, like the human brain, can be exposed to huge amounts of data, such as sound and images, and then extract useful patterns and intuitive information from them.

These may include computers that scan millions of images and understand what they show: identifying a cat, for example. This capability is very important for cars that can drive themselves, which need to be "aware" of what is around them.

It can also apply to systems that understand human speech, which can be used in sophisticated voice-recognition systems and real-time translation systems.

"One of the problems currently facing the robotics industry, in factories and in fields such as elderly-care robots or household robots, is that when these machines are used in the real world they have to deal with the unexpected. You cannot program in advance every situation that might happen. In a way, these machines need intelligence that is adaptable and able to learn by itself," said Hassabis.

Some fear, however, that building computers that can be smarter than humans could be dangerous.

Physicist Stephen Hawking said in December 2014 that the full development of artificial intelligence "could mean the end of the human race."

Source: mediafax