EDA learns to love AI

By Chris Edwards | Posted: June 26, 2018
Topics/Categories: Blog - EDA

Although the rise of AI chipmaking startups has attracted the most attention, the slow penetration of machine learning into electronics design continues at this year’s Design Automation Conference (DAC). A number of suppliers are also combining AI with the other theme of this year’s event: the expansion into cloud computing.

At a lunchtime session organized by Synopsys focused on the company’s Fusion implementation tools, Michael Jackson, senior vice president in the design group, said the overall strategy for the near term is to use machine learning across a number of tools to speed up analysis.

As part of a rollout of several products designed to run in the cloud, Cadence Design Systems said it has applied machine learning to library characterization. The use of machine learning for this task is an example of how the technique is likely to evolve, at least in its early phases: learning from prior runs to identify the most likely hotspots that need detailed analysis, and so speeding up the process.

Simulation reduction

The simulation of libraries and standard cells needs to take into account many more design corners than used to be the case, because of the growing influence of temperature and process variation. It also relies increasingly on statistical analysis techniques. This consumes a large (and growing) number of machine cycles.
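Some rough arithmetic shows the scale of the problem (the figures below are illustrative assumptions, not numbers from the article):

```python
# Illustrative arithmetic only: why multi-corner, statistical library
# characterization consumes so many machine cycles. All counts are assumed.
processes, voltages, temperatures = 5, 12, 7   # PVT corner counts
cells, arcs_per_cell = 1200, 8                 # library size
mc_samples = 50                                # statistical samples per arc

spice_runs = processes * voltages * temperatures * cells * arcs_per_cell * mc_samples
print(f"{spice_runs:,} SPICE runs")            # roughly 200 million
```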

Cadence refers to its use of machine learning in library characterization as “smart interpolation”: using learned heuristics to identify the most important design corners that need characterization.

Seena Shankar, senior principal product manager in the custom IC and PCB group at Cadence, said the tool “leverages clustering techniques to identify and predict critical corners based on a handful of cells from the library. The selection of critical voltage corners for characterization across a range of voltages is done by keeping temperature and process constant.”

Typically, users provide the voltage ranges and pick the cells upon which they want to base the analysis, Shankar explained. “Liberate Trio generates simulation data across the range of voltages provided and decides the critical points for characterization. The level of supervision needed is minimal.”
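As an illustration of the general idea, a clustering-based corner selection could look something like the sketch below. It assumes a hypothetical spice_delay() helper standing in for real circuit simulation; this is a sketch of the technique, not Cadence’s Liberate Trio implementation.

```python
# Hypothetical sketch of clustering-based critical-corner selection.
# spice_delay(cell, voltage) is an assumed helper that returns a delay
# measurement with temperature and process held constant.
import numpy as np
from sklearn.cluster import KMeans

def pick_critical_voltages(voltages, sample_cells, spice_delay, n_critical=4):
    # Feature vector per voltage point: delays of a handful of sample cells.
    features = np.array([[spice_delay(cell, v) for cell in sample_cells]
                         for v in voltages])
    km = KMeans(n_clusters=n_critical, n_init=10).fit(features)
    # Take the voltage closest to each cluster centre as a "critical" corner.
    critical = []
    for centre in km.cluster_centers_:
        idx = np.argmin(np.linalg.norm(features - centre, axis=1))
        critical.append(voltages[idx])
    return sorted(set(critical))

# The full library would then be characterized only at the critical voltages,
# with values at the remaining voltages interpolated from those results.
```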

Ron Moore, vice president of business planning in Arm’s physical design group, said the team “saw a notable improvement in turnaround time using the same number of CPUs”.

A little over a year ago, Solido Design Automation, which was acquired by Mentor, a Siemens business, in late 2017, launched a library characterization tool as part of a long-term program of development based on machine learning. One tool, used for high-sigma Monte Carlo analysis, has absorbed 40 person-years of effort since the initial version was developed at Solido a decade ago.
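The attraction of machine learning for high-sigma analysis comes down to sample counts. A back-of-the-envelope calculation (mine, not Solido’s method) shows why brute-force Monte Carlo becomes impractical in the far tails:

```python
# Why brute-force Monte Carlo breaks down at high sigma: the number of
# random samples needed to observe enough failures grows with 1/p_fail.
from scipy.stats import norm

for k in (3, 4, 5, 6):
    p_fail = norm.sf(k)      # single-sided tail probability at k sigma
    runs = 100 / p_fail      # ~100 observed failures for a stable estimate
    print(f"{k} sigma: p_fail ~ {p_fail:.2e}, ~{runs:.1e} SPICE runs")
# At six sigma this demands on the order of 1e11 runs.
```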

AI targets in EDA

In a session at DAC intended to show how EDA tools can make use of AI, Jeff Dyck, director of engineering at Mentor, laid out the characteristics that make certain applications suitable for AI.

“You look for things that are CPU-bound,” Dyck said, where brute-force simulation of all parameters would tie up a large number of licenses and machines but where heuristics would lead to much faster results. Being on the critical path increases the pressure to reduce the simulation times. “That looks like a great machine-learning problem,” he added.

A key difference between EDA and other sectors that want to apply machine learning is the nature of the data. “We don’t try to take a bunch of historic data and learn from that. We collect data on the fly so that it runs on an adaptive machine-learning loop.” The generator of the training data is often a simulator: “Usually a SPICE simulator that we use in our case,” Dyck said.
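A minimal sketch of such an adaptive, simulator-in-the-loop approach might look like the following (an illustration of the general technique, not Mentor’s code). A surrogate model is refit as fresh results arrive, and each new sample is taken where the model is least certain; spice_measure() is a toy stand-in for a real SPICE run.

```python
# Adaptive machine-learning loop: generate training data on the fly from a
# simulator instead of learning from a historic dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def spice_measure(x):
    # Toy stand-in for a SPICE simulation of one parameter vector.
    return float(np.sin(3 * x[0]) + 0.1 * rng.normal())

# Start from a small random seed set rather than historic data.
X = rng.uniform(0, 1, size=(8, 1))
y = np.array([spice_measure(x) for x in X])

for _ in range(10):                            # adaptive loop
    model = RandomForestRegressor(n_estimators=50).fit(X, y)
    cand = rng.uniform(0, 1, size=(200, 1))
    # Sample next where the trees disagree most (highest uncertainty).
    preds = np.stack([t.predict(cand) for t in model.estimators_])
    next_x = cand[np.argmax(preds.std(axis=0))]
    X = np.vstack([X, next_x])
    y = np.append(y, spice_measure(next_x))
```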

However, analog design is far from being the only area in EDA that is amenable to machine learning. Synopsys has been looking closely at using machine learning to build heuristics that help speed up the operation of formal verification.

Non-obvious answers

In FPGA design, Singapore-based Plunify has used the combination of cloud computing and machine learning to tighten up timing. The InTime tool learns which implementation settings work for different types of logic from its results with previous RTL compilations.

Plunify co-founder and COO Kirvy Teo said: “In terms of what can be done, we have seen designs pass timing from as bad as 2ns worst negative slack. The results have surprised even ourselves. Usually what happens is that, when we get a design, the user has tried all the aggressive settings that they can get their hands on. Usually it doesn’t get the best results. But we have found groups of settings that work better. If you set everything to be aggressive the software churns without getting good results.”

The learned attributes can produce results that seem counter-intuitive, but which work. For example, FPGA designers would typically expect to use DSP blocks wherever possible. The tool learned that settings which limit the number of hard-wired DSPs in a design can yield better results, possibly because of a reduction in wiring congestion: the router no longer has to find paths to and from distant DSP blocks.
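A settings recommender of this general kind could be sketched as below (a hypothetical illustration, not Plunify’s code): a model is trained on past compile runs, recorded as settings plus the worst negative slack (WNS) they achieved, and untried settings combinations are then ranked by predicted timing.

```python
# Hypothetical sketch: learn settings -> worst negative slack from past runs,
# then rank untried settings groups. Settings names and values are assumed.
import itertools
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Past runs: (effort_level, dsp_usage_limit, placement_seed) -> WNS in ns.
past_settings = np.array([[2, 1.0, 1], [2, 0.5, 2], [1, 0.75, 3], [0, 1.0, 4]])
past_wns = np.array([-2.0, -0.4, -0.9, -1.6])

model = GradientBoostingRegressor().fit(past_settings, past_wns)

# Enumerate candidate settings groups and pick those predicted best.
candidates = np.array(list(itertools.product(
    [0, 1, 2], [0.25, 0.5, 0.75, 1.0], [1, 2, 3])))
scores = model.predict(candidates)
best = candidates[np.argsort(scores)[::-1][:3]]  # least-negative predicted WNS
print(best)
```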

Dyck said that the use of machine learning in tools tends to meet initial resistance from designers, although the ability to deliver answers faster is breaking down that resistance.

“Supporting these tools is very different. A key difficulty is proving that an answer is correct. Verifiability and trust are the trickiest things. How do you prove they are right without running the actual simulations?” Dyck said. “You need trust. If people don’t trust them, they won’t use them.”
