Can A.I. beat human engineers at designing microchips? Google thinks so

Could artificial intelligence be better at designing chips than human experts? A group of researchers from Google’s Brain Team attempted to answer this question and came back with interesting findings. It turns out that a well-trained A.I. is capable of designing computer microchips — and with great results. So great, in fact, that Google’s next generation of A.I. computer systems will include microchips created with the help of this experiment.

Azalia Mirhoseini, a computer scientist on Google Research's Brain Team, described the approach in Nature together with several colleagues. Artificial intelligence usually has an easy time beating a human mind at games such as chess. Some might say that A.I. can't think like a human, but in the case of microchips, that turned out to be exactly what made it good at finding out-of-the-box solutions.

Designing a microchip involves "floor planning," a lengthy process in which human experts, aided by computer tools, search for the optimal layout of all the subsystems on a chip to deliver the best possible performance. Minuscule changes to the placement of each component can have a massive impact on how powerful the chip turns out to be, whether it's a processor, a graphics card, or a memory core.

Google’s engineers admit that designing floor plans for a new microchip takes “months of intense effort” for a whole team of people. However, Google Research’s Brain Team based in Mountain View, California, seems to have cracked the code that makes the whole process simpler. The answer? Treating floor planning as a game.

As reported by Azalia Mirhoseini and Anna Goldie, co-leads of the research team, the A.I. was trained to treat chip design as a game in which the objective is to find the most efficient layout. Using a dataset of 10,000 microchip floor plans, the team trained a reinforcement learning algorithm to tell good floor plans from bad ones, scoring each on metrics such as wire length, power usage, and chip area.
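To give a sense of what "scoring" a floor plan means, here is a toy sketch in Python. This is not Google's actual code: the function names, the weights, and the choice of half-perimeter wirelength (a standard estimate in chip placement) are all illustrative assumptions, and a real evaluator would also model congestion, timing, and power.

```python
def wirelength(placement, nets):
    """Half-perimeter wirelength: for each net (a group of connected
    components), sum the width and height of the bounding box that
    encloses all of its components' positions."""
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def reward(placement, nets, area_used, area_budget,
           wl_weight=1.0, area_weight=0.5):
    """Reward is the negative of a weighted cost: shorter wires and
    staying within the area budget score higher, which is what a
    reinforcement learning agent would try to maximize."""
    wl = wirelength(placement, nets)
    area_penalty = max(0.0, area_used - area_budget)
    return -(wl_weight * wl + area_weight * area_penalty)

# Hypothetical placement: component name -> (x, y) grid position.
placement = {"a": (0, 0), "b": (3, 4), "c": (1, 1)}
nets = [["a", "b"], ["b", "c"]]
print(wirelength(placement, nets))   # bounding boxes: (3+4) + (2+3)
print(reward(placement, nets, area_used=10, area_budget=12))
```

An agent trained on thousands of floor plans learns which placement moves tend to raise this kind of score, which is how it can later generate strong layouts on its own.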

The better the A.I. became at recognizing optimal chip configurations, the better it became at producing its own. Along the way, it found some unusual approaches to component placement. These inspired the human experts to try something new, such as reducing the distance between components by arranging them in doughnut shapes.

Attempts to simplify the process have been made before, but five decades' worth of research had not produced a workable solution. Until now, automated planning techniques were unable to match the performance of human-designed chips.

According to Anna Goldie, the difference is that this algorithm learns from experience. "Previous approaches didn't learn anything with each chip," said Goldie, crediting the use of machine learning.

What used to take a team of experts several months can now be done by artificial intelligence in under six hours. The resulting floor plans match the quality of human-made designs and, in some cases, surpass it. As such, Google's findings could save hundreds, if not thousands, of work hours on each new generation of computer chips.

The company is now using these A.I.-designed chips in further studies. The scientists suggest that these more powerful chips could in turn accelerate research elsewhere, including A.I. applications such as vaccine testing and city planning. As A.I. becomes more and more widespread, there will certainly be even more big discoveries to watch out for in the near future.

Monica J. White