Artificial intelligence programs have progressed from learning by playing against humans in games like chess, Jeopardy and Go to learning on their own by interacting with other A.I. programs. We have seen this with video-processing software from Nvidia that can create fake photos and video, and with self-driving vehicles.
Google’s DeepMind project recently turned loose a new version of its Go-playing A.I. software, called AlphaGo Zero.
The game of Go is widely viewed as an unsolved “grand challenge” for artificial intelligence. Games are a great testing ground for inventing smarter, more flexible algorithms that can tackle problems in ways similar to humans. The first classic game mastered by a computer was noughts and crosses (also known as tic-tac-toe), in 1952.
But until now, one game has thwarted A.I. researchers: the ancient game of Go.
About the game of Go
Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.
The game was invented in ancient China more than 2,500 years ago and is believed to be the oldest board game continuously played today. In antiquity it was considered one of the four essential arts of the cultured aristocratic Chinese scholar.
AlphaGo Zero is the latest evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is arguably the strongest Go player in history. In the graph below you can see the power of deep-learning programs as the algorithms learn from experienced players until they surpass what is humanly possible.
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.
According to Google’s DeepMind project:
It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.
This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.
This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo Zero itself.
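The self-play loop described above can be illustrated with a toy sketch. AlphaGo Zero combines a deep neural network with a Monte Carlo tree search; the minimal example below swaps those for a simple tabular value function and epsilon-greedy move selection on tic-tac-toe, purely to show the idea of a program starting from random play, becoming its own teacher, and improving from the outcomes of its own games. All names here (`SelfPlayAgent`, the update rule, the parameters) are illustrative assumptions, not DeepMind's implementation.

```python
import random

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == EMPTY]

class SelfPlayAgent:
    """Learns a value table for board states by playing against itself."""

    def __init__(self, alpha=0.5, epsilon=0.1):
        self.values = {}        # state string -> estimated value for "X"
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability

    def value(self, board):
        return self.values.get("".join(board), 0.5)  # unknown states start neutral

    def choose(self, board, player):
        # Occasionally explore a random move; otherwise pick the move
        # leading to the state the current value table likes best.
        if random.random() < self.epsilon:
            return random.choice(moves(board))
        best, best_v = None, None
        for m in moves(board):
            board[m] = player
            v = self.value(board)
            board[m] = EMPTY
            v = v if player == "X" else 1.0 - v  # "O" prefers low "X"-values
            if best_v is None or v > best_v:
                best, best_v = m, v
        return best

    def play_one_game(self):
        board, history, player = [EMPTY] * 9, [], "X"
        while True:
            board[self.choose(board, player)] = player
            history.append("".join(board))
            w = winner(board)
            if w or not moves(board):
                # Game over: propagate the outcome back through every
                # state visited, nudging each estimate toward the result.
                target = 1.0 if w == "X" else (0.0 if w == "O" else 0.5)
                for state in reversed(history):
                    old = self.values.get(state, 0.5)
                    self.values[state] = old + self.alpha * (target - old)
                    target = self.values[state]
                return w
            player = "O" if player == "X" else "X"

agent = SelfPlayAgent()
for _ in range(5000):
    agent.play_one_game()
```

Each iteration of `play_one_game` is one turn of the cycle the quote describes: the current value estimates guide play, the game's outcome updates the estimates, and the stronger estimates guide the next game. The real system does this with a neural network updated by gradient descent and millions of games, but the feedback loop has the same shape.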
If we consider what this type of A.I. may be able to accomplish in areas such as health care, climate change, or even space exploration, it is hard not to be excited about the prospects, right? But… do we really want to hand over our place at the top of the food chain to machines? Are we sure we know what we are doing?