
Google outraces Facebook to AI breakthrough by beating a Go champ

Games have always been a preferred domain for artificial intelligence developers to test their mettle. The fixed, rule-bound systems of games provide a clean environment in which a focused AI can take on a human counterpart with some objective measure of relative success. Now a team out of Google has passed another important milestone in the history of AI gaming, creating the first system to defeat a professional player of the ancient Chinese game of Go.

Starting with tic-tac-toe in 1954, and then checkers in 1994, computers have been steadily working their way through increasingly complex games, matching and then surpassing the best that humanity has to offer. Chess was long held up as a bastion of human intellect too subtle for computers to master, until 1997, when IBM’s Deep Blue famously defeated Garry Kasparov, one of the greatest players in chess history. More recently, IBM racked up another success when its Watson defeated two Jeopardy champions in 2011. Google made headlines last year with a generalized AI that successfully taught itself over a dozen Atari games based solely on pixel input.

Go has long been a holy grail for AI researchers due to its combination of relatively simple rules and immense strategic complexity. Originating in China over 2,500 years ago, Go has amassed millions of devoted players and is considered a high intellectual pursuit, particularly in Japanese and Chinese culture. Players alternate placing black or white stones on a grid with the goal of capturing one another’s pieces or fully surrounding sections of the board for points. The rules are straightforward, but because players can place stones anywhere on the board, the game has 1 x 10^127 possible states. That’s more than the number of atoms in the known universe, and many orders of magnitude more than the number of possible chess positions.

Traditional AI solutions to games involve using search trees to run through possible ways that the game could play out, based on the current game state, in order to make the most informed decision. This brute-force method, leveraging computing strength to run through more possibilities than an intuition-reliant human could, has always been completely insufficient in the face of Go’s open-ended complexity.
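
To get a feel for why that falls apart, here is a minimal sketch of exhaustive game-tree search (minimax) in Python, run on a toy tic-tac-toe board rather than Go; the board encoding and scoring convention are made up for the example, and none of this is how AlphaGo itself works.

```python
# A minimal sketch of the brute-force idea: exhaustive game-tree search
# (minimax), applied here to tic-tac-toe because its tree is tiny enough to
# enumerate. The same approach becomes hopeless for Go, where each turn
# offers roughly 250 legal moves. This is an illustrative toy, not AlphaGo.

def winner(board):
    """Return 'X', 'O', or None for a board given as a 9-character string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation:
    +1 if X can force a win, -1 if O can, 0 for a draw with perfect play."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full with no winner: a draw
    scores = []
    for i in moves:
        child = board[:i] + player + board[i + 1:]
        scores.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

if __name__ == "__main__":
    # A few hundred thousand positions: instant for tic-tac-toe, unthinkable for Go.
    print("Value of the empty board:", minimax(' ' * 9, 'X'))  # prints 0, a forced draw
```

Deep Blue could get away with a souped-up version of exactly this kind of search because chess offers only a few dozen legal moves per turn; Go’s far wider branching, plus the difficulty of scoring an unfinished position, is what pushed researchers toward the learning-based approach below.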

Google’s team instead relied on neural networks, an approach to intelligent systems that runs inputs through layers of virtual neurons loosely mimicking animal brain function. The result is measured against a desired goal, and the connection strengths within the network are then tweaked. Through repetition, this allows such systems to dynamically “learn,” arriving at solutions and strategies that were never directly programmed in. AlphaGo, Google’s system, comprises 12 neural network layers, including a “policy network” that selects the next move after the board state has been run through the other layers, and a “value network” that predicts the eventual winner from a given position.
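
For a rough, hands-on sense of those two roles, the sketch below (Python with NumPy) wires up a single-layer “policy” head that turns a board encoding into a probability for each of the 361 points on a 19 x 19 grid, and a “value” head that squashes the same encoding into one score. The one-hot board encoding, the layer sizes, and the random, untrained weights are all invented for illustration; AlphaGo’s real networks are far deeper and are trained on data.

```python
import numpy as np

# Toy illustration of the two roles described above: a "policy" head mapping
# a board encoding to a probability for each of the 361 points of a 19x19
# board, and a "value" head mapping the same encoding to a single score.
# The encoding, sizes, and random untrained weights are illustrative only.

BOARD_POINTS = 19 * 19              # 361 intersections
FEATURES = 3 * BOARD_POINTS         # each point is one of: empty, black, white

rng = np.random.default_rng(0)
policy_weights = rng.normal(scale=0.01, size=(FEATURES, BOARD_POINTS))
value_weights = rng.normal(scale=0.01, size=FEATURES)

def encode(board):
    """One-hot encode a board given as a length-361 list of ' ', 'B', or 'W'."""
    planes = np.zeros((3, BOARD_POINTS))
    for i, stone in enumerate(board):
        planes[{' ': 0, 'B': 1, 'W': 2}[stone], i] = 1.0
    return planes.reshape(-1)

def policy(board):
    """Return a probability distribution (softmax) over the 361 candidate moves."""
    logits = encode(board) @ policy_weights
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def value(board):
    """Return a score in (-1, 1); positive would mean the position looks winning."""
    return float(np.tanh(encode(board) @ value_weights))

empty_board = [' '] * BOARD_POINTS
move_probs = policy(empty_board)
print("Most favored point:", int(move_probs.argmax()), "prob:", round(float(move_probs.max()), 4))
print("Value estimate:", round(value(empty_board), 4))
```

The training described next, first on expert moves and then on self-play, is what would turn weights like these from random noise into something useful.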

Thirty million moves from human expert games were run through the network until it could successfully predict human moves 57 percent of the time (beating the previous record of 44 percent). To do more than just mimic human players, the team then set AlphaGo to play thousands of games against itself, developing its own, non-programmed strategies by adjusting connections and reinforcing decisions that led to victories, with the Google Cloud Platform providing the necessary computing oomph. More technical nitty-gritty on how AlphaGo was developed can be found in the article the team published in Nature.
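
That self-play step is, in spirit, a policy-gradient loop: play games with the current policy, then nudge the policy toward the moves the eventual winner played. The sketch below illustrates the loop on a deliberately trivial made-up game in which each “player” simply picks a number and the higher number wins; the game, the tabular softmax policy, the learning rate, and the update rule are all invented for illustration and are not AlphaGo’s actual training pipeline.

```python
import math
import random

# Toy sketch of learning from self-play: the same policy plays both sides,
# and after each game the move made by the winner is reinforced while the
# loser's move is discouraged. Everything here (the one-move game, the
# tabular softmax policy, the learning rate) is invented for illustration.

ACTIONS = list(range(5))            # each player picks one of 5 "moves"
logits = [0.0] * len(ACTIONS)       # shared policy, updated from self-play
LEARNING_RATE = 0.1

def sample_action():
    weights = [math.exp(l) for l in logits]
    return random.choices(ACTIONS, weights=weights)[0]

def reinforce(action, reward):
    """REINFORCE-style update: raise the logit of a winning move, lower a losing one."""
    probs = [math.exp(l) for l in logits]
    total = sum(probs)
    for a in ACTIONS:
        grad = (1.0 if a == action else 0.0) - probs[a] / total
        logits[a] += LEARNING_RATE * reward * grad

random.seed(0)
for _ in range(2000):
    a1, a2 = sample_action(), sample_action()   # the policy plays itself
    if a1 == a2:
        result = random.choice([1, -1])          # break ties with a coin flip
    else:
        result = 1 if a1 > a2 else -1            # +1 means player one won
    reinforce(a1, result)                        # winner's move reinforced...
    reinforce(a2, -result)                       # ...loser's move discouraged

print("Learned preference over moves:", [round(math.exp(l), 2) for l in logits])
```

Run it and the policy drifts toward the highest number, for the same basic reason AlphaGo’s self-play drifts toward stronger Go moves: decisions that keep showing up in wins get reinforced.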

AlphaGo was then put to the test. First it took on the reigning top Go computer programs, winning all but one of 500 games. Then came the real test: challenging three-time European Go champion Fan Hui. Behind closed doors last October, AlphaGo went 5 and 0 against Fan Hui, marking the first time a computer program has ever bested a professional Go player.

Coincidentally, Facebook also just announced its own efforts to tackle Go with artificial intelligence in a public post from founder Mark Zuckerberg. Although Facebook has apparently made substantial progress over the last year, Google appears to have beaten it to the punch by announcing AlphaGo’s victory over Fan Hui. It may be all fun and games for now, but tackling challenges like Go that were previously thought insurmountable has larger implications for the progress of connectionist AI and machine learning, which have the potential to become extremely powerful tools for analyzing messy, real-world problems.
