AI’s design speedups, with and without machine learning

By Chris Edwards | Posted: June 14, 2021

AI hardware could help dramatically accelerate analog and digital design, and not all of it directly through machine learning. Researchers at a workshop at the VLSI Symposia (June 13, 2021) described work spanning applications from automated transistor sizing to machine-learning-based macro placement that has already been used in the design of Google’s next generation of TensorFlow accelerators.

Young-Joon Lee, physical design engineer at Google, said machine learning could transform the way systems are designed as he described the work on automated macro placement published in Nature last week (June 10, 2021). “The reason we think a learning-based approach can be superior is that the system can learn the underlying relationship between the context and the target optimization metrics and leverage it to explore various optimization tradeoffs. Learning-based methods can gain experience as they solve more instances of the problem.”

After a series of refinements to tune performance and improve its ability to generalize across different types of design, Google’s reinforcement-learning system produced first-attempt results that fell short of a commercial placer. But after learning the characteristics of the circuitry over 24 hours of processing, it achieved a quality of results similar to that of a human team that took six to eight weeks of repeated iterations to converge on a comparable result, and with just under 3 per cent less overall wire length.

Inspirations for manual design

The results tend to look quite different from typical manual placements, which are often more rectilinear and symmetrical, though Google blurred out the details in the presentation. “Physical designers commented that the half-circle placement of standard cells around the macros helped minimize the wire length,” Lee said, adding that the human team working on the v4 TensorFlow processing unit (TPU) adopted similar layouts to those generated by the MLPlacer software after they had seen them. They were also able to improve on the results of the automated placer. “Users took macro placements from MLPlacer and rearranged them a bit to improve worst negative slack.”

Though blurred, Google's manual and AI placements show distinct differences

There is strong competition from more traditional methods. The Google method uses deep reinforcement learning to place macros, but for standard-cell placement around those macros the experimental software uses a more conventional analytic solver.

“The reason we did this is because macro placement is more challenging. Standard cells are small and existing analytical approaches produce good results for them,” Lee said.
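To make the division of labor concrete, the toy sketch below shows the shape of a reinforcement-learning macro-placement problem. It is not Google’s system: the `ToyPlacementEnv` class, the tiny netlist and the random stand-in “policy” are invented for illustration, the reward is simply the negative half-perimeter wirelength (HPWL), and a real agent would replace the random choice with a learned network conditioned on the netlist and the partial placement.

```python
# Hypothetical sketch of an RL macro-placement setup (names and netlist invented).
import random

GRID = 16  # coarse placement grid

class ToyPlacementEnv:
    def __init__(self, num_macros, nets):
        self.num_macros = num_macros
        self.nets = nets            # each net is a tuple of macro indices
        self.reset()

    def reset(self):
        self.positions = {}         # macro index -> (x, y) grid cell
        self.next_macro = 0
        return self.next_macro

    def step(self, action):
        # action is the grid cell chosen for the current macro
        self.positions[self.next_macro] = action
        self.next_macro += 1
        done = self.next_macro == self.num_macros
        reward = -self.hpwl() if done else 0.0   # reward only once all macros are placed
        return self.next_macro, reward, done

    def hpwl(self):
        # half-perimeter wirelength over all nets
        total = 0
        for net in self.nets:
            xs = [self.positions[m][0] for m in net]
            ys = [self.positions[m][1] for m in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

# Random placement stands in for a trained policy network.
env = ToyPlacementEnv(num_macros=4, nets=[(0, 1), (1, 2, 3)])
env.reset()
done = False
while not done:
    action = (random.randrange(GRID), random.randrange(GRID))
    _, reward, done = env.step(action)
print("negative HPWL reward:", reward)
```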

David Pan, professor of electrical engineering at the University of Texas at Austin, said the DREAMplace tool his team developed uses AI hardware rather than AI techniques to speed up cell placement, taking advantage of the way both machine learning and more traditional algorithms revolve around optimization. He pointed out the similarities between the linear algebra used in the forward propagation of neural-network training and the calculations used by analytic solvers developed for placement, such as the RePlAce software developed by Professor Andrew Kahng’s team at the University of California, San Diego, which forms part of DARPA’s OpenROAD project portfolio.

No training required

“By doing this, we don’t need any training data: we are using the training structures provided by deep learning’s hardware and its software toolkits,” Pan explained. “We run on a GPU and rewrite the code using deep-learning toolkits to get the same quality of results as RePlAce but with a speedup of 40 times. We can use the same paradigm to solve other EDA problems.”
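The paradigm Pan describes can be illustrated with a minimal sketch: treat cell coordinates as trainable tensors and let a deep-learning toolkit’s GPU optimizer minimize a smooth wirelength surrogate. This is not DREAMplace itself, which also handles density and many other details; the tiny netlist, the log-sum-exp wirelength approximation and the parameter values below are illustrative assumptions.

```python
# Simplified, wirelength-only sketch of GPU-based analytic placement with a DL toolkit.
import torch

nets = [[0, 1, 2], [2, 3], [1, 3, 4]]    # each net lists the cells it connects (invented)
num_cells = 5
gamma = 0.5                               # smoothing factor for the log-sum-exp surrogate

device = "cuda" if torch.cuda.is_available() else "cpu"
pos = torch.rand(num_cells, 2, device=device, requires_grad=True)
opt = torch.optim.Adam([pos], lr=0.01)

def smooth_hpwl(pos):
    # differentiable approximation of half-perimeter wirelength:
    # gamma*(LSE(x/gamma) + LSE(-x/gamma)) upper-bounds max(x) - min(x)
    total = 0.0
    for net in nets:
        p = pos[net]
        for dim in (0, 1):
            c = p[:, dim]
            total = total + gamma * (torch.logsumexp(c / gamma, 0)
                                     + torch.logsumexp(-c / gamma, 0))
    return total

for step in range(200):                   # gradient descent, automatically on the GPU if present
    opt.zero_grad()
    loss = smooth_hpwl(pos)
    loss.backward()
    opt.step()

print("smoothed wirelength after optimisation:", smooth_hpwl(pos).item())
```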

For other problems, Pan’s group has exploited aspects of machine learning. For a lithographic predictor, they used a generative adversarial network (GAN) to provide results almost two thousand times faster than is possible with conventional simulation. He said that, following discussions with industry, the errors in the results relative to simulation appear acceptable.
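The general shape of such a GAN-based predictor is sketched below: a generator maps a mask clip to a predicted printed pattern, while a discriminator judges mask–pattern pairs. The architectures, image sizes and random stand-in data are placeholders rather than the published model.

```python
# Hedged sketch of a conditional GAN for lithography prediction (architectures invented).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, mask):
        return self.net(mask)              # predicted printed pattern

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 1))
    def forward(self, mask, pattern):
        return self.net(torch.cat([mask, pattern], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

masks = torch.rand(8, 1, 64, 64)           # stand-in mask clips
targets = torch.rand(8, 1, 64, 64)         # stand-in simulated contours

for _ in range(10):                        # a few illustrative training steps
    # discriminator: real (mask, simulated) pairs vs generated pairs
    fake = G(masks).detach()
    d_loss = bce(D(masks, targets), torch.ones(8, 1)) + \
             bce(D(masks, fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator while matching the simulator output
    fake = G(masks)
    g_loss = bce(D(masks, fake), torch.ones(8, 1)) + F.l1_loss(fake, targets)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```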

Where machine learning is used, a common theme lies in the use of graph neural network (GNN) topologies. These use embedding techniques analogous to those used in neural-network language processing, where words are represented as high-dimensional vectors so that the network can cluster similar entities and handle them more easily. GNNs instead use graphs representing the nodes and edges of subcircuits to generate the embedded vectors. Song Han, assistant professor at MIT, said: “We leverage graph neural networks inspired by the idea that a circuit is a graph.”
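A minimal sketch of the circuit-as-graph idea is shown below: one round of message passing over a netlist’s adjacency matrix turns per-node features into embedding vectors that downstream models can cluster or feed into predictors. The four-node graph, feature sizes and the `SimpleGNNLayer` class are invented for illustration.

```python
# Toy single-layer GNN over an invented four-node circuit graph.
import torch
import torch.nn as nn

# adjacency matrix following the netlist connectivity (invented example)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
feats = torch.rand(4, 8)                  # per-node input features (e.g. device type, size)

class SimpleGNNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x, adj):
        # average neighbour features, combine with the node's own, then transform
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adj @ x / deg
        return torch.relu(self.lin(agg + x))

layer = SimpleGNNLayer(8, 16)
embeddings = layer(feats, adj)            # one 16-dimensional vector per circuit node
print(embeddings.shape)                   # torch.Size([4, 16])
```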

Haoxing Ren, principal research scientist at Nvidia, described several projects that revolve around the use of GNNs coupled with Bayesian optimization to predict parasitics more quickly than simulation, as well as for transistor sizing and standard-cell library layout creation. The cell-creation system NVCell can build almost complete libraries, reducing the size of just over 10 per cent of the cells compared with a manual library.
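Stripped down to the Bayesian-optimization loop alone, the sizing idea looks like the sketch below: a Gaussian-process surrogate stands in for slow circuit simulation, and an expected-improvement rule picks the next transistor widths to try. The two-parameter “circuit” objective, bounds and candidate pool are invented stand-ins, not Nvidia’s flow, which additionally conditions on GNN embeddings of the circuit.

```python
# Hedged sketch of Bayesian optimization for transistor sizing (objective invented).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulate(widths):
    # placeholder for a SPICE run: returns a figure of merit to minimise
    w1, w2 = widths
    return (w1 - 2.0) ** 2 + (w2 - 0.5) ** 2

rng = np.random.default_rng(0)
bounds = np.array([[0.1, 4.0], [0.1, 4.0]])                 # allowed width range (illustrative)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))    # initial random sizings
y = np.array([simulate(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # expected improvement over a random candidate pool
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    imp = best - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulate(x_next))

print("best widths found:", X[np.argmin(y)], "objective:", y.min())
```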
