Man and machine

Posted on March 16, 2016 1:51 pm
Categories: Dialogue Media

March 2016 saw the end of the historic match between Lee Sedol, one of the world’s best Go players, and AlphaGo, an artificially intelligent system designed by a team of researchers at DeepMind, a London AI lab now owned by Google. The machine claimed victory in the best-of-five series, winning four games and losing only one. It marked the first time a machine had beaten the very best at this ancient and enormously complex game—a feat that, until recently, experts didn’t expect would happen for another ten years.

The victory is notable because the technologies at the heart of AlphaGo are the future. They’re already changing Google and Facebook and Microsoft and Twitter, and they’re poised to reinvent everything from robotics to scientific research. This is scary for some. The worry is that artificially intelligent machines will take our jobs and maybe even break free from our control—and on some level, those worries are healthy. We won’t be caught by surprise.

Originally, David Silver, the DeepMind researcher who led the AlphaGo project, and his team taught AlphaGo to play the ancient game using a deep neural network—a network of hardware and software that mimics the web of neurons in the human brain. This technology already underpins online services inside places like Google and Facebook and Twitter, helping to identify faces in photos, recognize commands spoken into smartphones, drive search engines, and more. If you feed enough photos of a lobster into a neural network, it can learn to recognize a lobster. If you feed it enough human dialogue, it can learn to carry on a halfway decent conversation. And if you feed it 30 million moves from expert players, it can learn to play Go.
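To make that concrete, here is a minimal sketch, in Python with PyTorch, of what that first, supervised stage can look like: a network that takes an encoded board position and learns to predict the move an expert made there. Everything here is an assumption for illustration: the tiny PolicyNet architecture, the 17 input feature planes, and the random stand-in data. AlphaGo's actual policy network was a much deeper convolutional network trained on those 30 million expert moves.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a move-prediction ("policy") network.
# A Go position is encoded as feature planes over the 19x19 board;
# the network outputs one logit per board point (19 * 19 = 361 moves).
class PolicyNet(nn.Module):
    def __init__(self, planes=17, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, 1, kernel_size=1)

    def forward(self, board):  # board: (batch, planes, 19, 19)
        return self.head(self.body(board)).flatten(1)  # (batch, 361)

net = PolicyNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# One supervised step: imitate the expert's move at each position.
boards = torch.randn(32, 17, 19, 19)         # stand-in for encoded positions
expert_moves = torch.randint(0, 361, (32,))  # stand-in for expert move labels
loss = nn.functional.cross_entropy(net(boards), expert_moves)
opt.zero_grad()
loss.backward()
opt.step()
```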

But then the team went further. Using a second AI technology called reinforcement learning, they set up countless matches in which (slightly) different versions of AlphaGo played each other. And as AlphaGo played itself, the system tracked which moves brought the most territory on the board. “AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving,” Silver said when Google unveiled AlphaGo early this year.
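Below is a rough sketch of that self-play stage, reusing the hypothetical PolicyNet and optimizer from the sketch above. The board updates and the win/loss signal are random placeholders rather than real Go logic; the point is the shape of the idea: sample moves from the policy, play the game out, then nudge the policy toward the moves that led to a win (a REINFORCE-style policy-gradient update).

```python
import torch

# Hypothetical self-play sketch: the policy samples its own moves, and a
# policy-gradient update reinforces them when the game ends in a win and
# discourages them after a loss. Positions and outcomes are stand-ins.
def self_play_update(net, opt, moves_per_game=50):
    log_probs = []
    for _ in range(moves_per_game):
        board = torch.randn(1, 17, 19, 19)  # stand-in for the evolving position
        dist = torch.distributions.Categorical(logits=net(board))
        move = dist.sample()
        log_probs.append(dist.log_prob(move))
    reward = 1.0 if torch.rand(1).item() > 0.5 else -1.0  # stand-in win/loss
    loss = -reward * torch.cat(log_probs).sum()  # reinforce winning moves
    opt.zero_grad()
    loss.backward()
    opt.step()
```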

Then the team took yet another step. They collected board positions and outcomes from these machine-versus-machine matches and fed them into a second neural network, one trained to judge the potential result of each move, to look ahead into the future of the game.
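Here, too, a minimal sketch may help, under the same assumptions as above: a second network that takes a position and predicts who will eventually win, trained on positions paired with the final results of the self-play games. The ValueNet architecture and the random stand-in data are illustrative, not DeepMind's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical "value network" sketch: given a position, predict the
# eventual winner as a score in [-1, 1], so the system can judge how a
# move is likely to turn out several moves down the line.
class ValueNet(nn.Module):
    def __init__(self, planes=17, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(width * 19 * 19, 1),
        )

    def forward(self, board):                # board: (batch, planes, 19, 19)
        return torch.tanh(self.body(board))  # predicted winner in [-1, 1]

value_net = ValueNet()
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

# One training step on stand-in self-play data: positions paired with
# the final result (+1 or -1) of the game they came from.
positions = torch.randn(32, 17, 19, 19)
outcomes = torch.randint(0, 2, (32, 1)).float() * 2 - 1
loss = nn.functional.mse_loss(value_net(positions), outcomes)
opt.zero_grad()
loss.backward()
opt.step()
```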

So AlphaGo learns from human moves, and then it learns from moves made when it plays itself. It understands how humans play, but it can also look beyond how humans play to an entirely different level of the game.

The symmetry of these two moves, AlphaGo's Move 37 in Game Two and Lee Sedol's Move 78 in Game Four, is more beautiful than anything else. One in ten thousand and one in ten thousand: by AlphaGo's own calculations, each was a move a human player would have made only once in ten thousand games. This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something superhuman. But at the same time, it's flawed. It can't do everything we humans can do. In fact, it can't even come close. It can't carry on a conversation. It can't play charades. It can't pass an eighth-grade science test. And it can't account for God's Touch, the name commentators gave Lee Sedol's Move 78.

But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.

From Wired
