
How Google is Training Chips to Design Themselves


Researchers said they found that their new artificially intelligent approach to chip design produced layouts superior to those created by human engineers.

One of the key challenges of computer design is how to pack circuitry and wiring as efficiently as possible while balancing power, speed and energy efficiency.

The recipe includes thousands of components that must communicate with one another flawlessly, all on a piece of real estate the size of a fingernail.

The process is known as chip floor planning, similar to what interior decorators do when laying out plans to dress up a room. With digital circuitry, however, instead of working with a single-floor plan, designers must consider layouts integrated across multiple levels. As one tech publication recently put it, chip floor planning is 3-D Tetris.
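
To make the floor planning analogy concrete, the problem can be framed as placing circuit blocks on a grid and scoring each candidate layout by how much wiring it implies. The sketch below is a toy illustration only, not Google's method; the block names, coordinates and connections are invented, and the score is the common half-perimeter wirelength (HPWL) proxy that placement tools try to minimize.

```python
# Toy floor planning score (illustrative only): for each net (a group of
# connected blocks), add the width plus the height of the bounding box that
# encloses the blocks it connects. Lower totals roughly mean shorter wires,
# less delay and less power.

# Hypothetical placement: block name -> (x, y) grid coordinates.
placement = {"cpu": (0, 0), "cache": (1, 0), "dsp": (4, 3), "io": (4, 0)}

# Hypothetical netlist: each net lists the blocks it connects.
nets = [("cpu", "cache"), ("cache", "dsp"), ("dsp", "io"), ("cpu", "io")]

def hpwl(placement, nets):
    """Sum of bounding-box half-perimeters over all nets."""
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

print(hpwl(placement, nets))  # a search procedure would try to minimize this
```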

The process is time-consuming. And with continual improvement in chip components, laboriously calculated final designs become outdated fast. Chips are generally designed to last between two and five years, but there is constant pressure to shorten the time between upgrades.

Google researchers have just taken a giant leap in floor planning design. In a recent announcement, senior Google research engineers Anna Goldie and Azalia Mirhoseini said they have designed an algorithm that “learns” how to achieve optimum circuitry placement. It can do so in a fraction of the time such designs currently require, analyzing potentially millions of possibilities rather than the thousands that are currently the norm.

In doing so, it can deliver faster, cheaper and smaller chips that take advantage of the latest developments.

Goldie and Mirhoseini applied the concept of reinforcement learning to the new algorithm. The system assigns “rewards” and “punishments” to each proposed design, and over many iterations the algorithm learns to recognize the best approaches.
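
As a rough illustration of the reward-and-punishment idea, the sketch below uses a far simpler reinforcement-learning setup than the neural policy the researchers describe: each candidate grid position for a single movable block is an “action,” the reward is higher when the block lands closer to the blocks it must connect to, and repeated trials nudge the learned value estimates toward good positions. The grid, block names and reward function are all invented for illustration, and overlap with fixed blocks is ignored in this toy.

```python
import random

# Minimal reinforcement-learning sketch (a simple bandit-style loop, not the
# neural policy described by Goldie and Mirhoseini): actions are candidate
# positions, rewards favor short wiring, and value estimates improve with trials.

random.seed(0)

GRID = [(x, y) for x in range(4) for y in range(4)]   # candidate positions
FIXED = {"cache": (0, 0), "io": (1, 0)}               # hypothetical fixed blocks

def reward(pos):
    """Negative total Manhattan wire distance from the movable block to the fixed blocks."""
    return -sum(abs(pos[0] - fx) + abs(pos[1] - fy) for fx, fy in FIXED.values())

values = {pos: 0.0 for pos in GRID}   # learned value estimate per action
counts = {pos: 0 for pos in GRID}
EPSILON = 0.2                         # exploration rate

for step in range(500):
    if random.random() < EPSILON:     # explore: try a random position
        pos = random.choice(GRID)
    else:                             # exploit: best-looking position so far
        pos = max(values, key=values.get)
    r = reward(pos)                   # the "reward" or "punishment"
    counts[pos] += 1
    values[pos] += (r - values[pos]) / counts[pos]   # incremental average update

best = max(values, key=values.get)
print("learned best position:", best, "reward:", reward(best))
```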

The notion of such reinforcement has roots in the school of psychology known as behaviorism. Its founder, John B. Watson, famously suggested that all animals, including humans, are essentially complex machines that “learn” by responding to positive and negative stimuli. How surprised Watson would be to learn that principles he first articulated in 1913 are, more than a century later, being applied to “intelligent” machines as well.

Google researchers said that, after extensive testing, they found their new artificially intelligent approach to chip design produced layouts superior to those created by human engineers.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” the designers said in a statement published on arXiv.org, a repository of scientific research managed by Cornell University.

Computer circuitry has come a long way since the first “all-electronic calculating machine”—ENIAC—was completed in 1945. Jam-packed with 18,000 vacuum tubes, the precursors to integrated circuits and computer chips, and miles of wiring, the massive $6 million machine stretched as wide as three commuter buses, weighed 30 tons and took up an entire room of the University of Pennsylvania lab where it was created.

Today’s iPhones feature chips the size of a pinky fingernail that are 1,300 times more powerful, 40 million times smaller and 1/17,000 the cost of the ENIAC.

Google’s new algorithm may also help ensure the continuation of Moore’s Law, which states the number of transistors packed into microchips doubles roughly every two years. In 1971, Intel’s 4004 chip housed 2,250 transistors. Today, AMD’s Epyc Rome packs 39.5 billion transistors.
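
As a quick sanity check on those figures, the implied doubling time can be computed directly; the short calculation below uses only the transistor counts and years cited above.

```python
import math

# Back-of-the-envelope check of the doubling claim
# (4004: ~2,250 transistors in 1971; Epyc Rome: ~39.5 billion in 2019).
old_count, old_year = 2_250, 1971
new_count, new_year = 39_500_000_000, 2019

doublings = math.log2(new_count / old_count)            # about 24 doublings
years_per_doubling = (new_year - old_year) / doublings  # about 2 years each

print(f"{doublings:.1f} doublings, one roughly every {years_per_doubling:.1f} years")
```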

Which leaves plenty of possibilities for Google’s new floor planning algorithm.

Source: Peter Grad, Tech Xplore
