Formal engines learn from experience

By Chris Edwards |  No Comments  |  Posted: May 8, 2019
Topics/Categories: Blog - EDA

Machine learning is coming to formal verification as vendors explore ways to make their products easier to use for non-formal specialists.

At CDNLive EMEA this week, Cadence Design Systems launched what it called the third generation of its JasperGold formal-verification suite. The product adds several mechanisms based on various forms of artificial intelligence (AI) to automate the job of picking and scheduling the proof engines deployed on a design, based on how its RTL is structured.

A few weeks earlier, at a seminar on machine learning in EDA organised by TV&S in April, formal specialist OneSpin Solutions described its own work on using forms of supervised learning to improve tools' ability to detect which engines need to be deployed on RTL, and when. As an early user of Cadence's new version, STMicroelectronics said it saw an average doubling in throughput with the solver-inference feature, which determines which engines best fit the design and task.

Pete Hardee, marketing director for Cadence, says the out-of-the-box engine is based on a supervised-learning model derived from 500 designs. “The training sequence is based on customer designs but there is no way to reverse-engineer those designs. Solver inference finds the most relevant solver and how to parameterise it based on these inputs: what kinds of assertions are used; how big the design is based on state space and cone of influence; and what kind of design it is, whether it’s control or data dominated.”
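The inputs Hardee lists amount to a feature vector per design. As a minimal illustration, not Cadence's actual implementation, solver inference can be sketched as a supervised classifier over such features; the feature set, solver names and the 1-nearest-neighbour model below are all invented for the example.

```python
# Hypothetical sketch of solver inference as a supervised classifier.
# Feature names, solver names and the 1-NN model are illustrative
# assumptions, not Cadence's actual implementation.
from math import sqrt

# Each training point: (features, best solver observed on that design).
# Features: (#safety assertions, #liveness assertions,
#            log2 of state-space size, control-dominance in [0, 1]).
TRAINING = [
    ((40.0, 2.0, 20.0, 0.9), "bdd_reach"),   # small, control-heavy
    ((10.0, 0.0, 64.0, 0.2), "bmc_sat"),     # large, data-dominated
    ((25.0, 5.0, 48.0, 0.5), "ic3_pdr"),     # mixed
]

def infer_solver(features):
    """Pick the solver used on the nearest training design (1-NN)."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING, key=lambda t: dist(t[0], features))[1]

print(infer_solver((12.0, 1.0, 60.0, 0.3)))  # → "bmc_sat"
```

A production model would of course be trained on hundreds of designs rather than three, but the shape of the problem — design features in, solver choice and parameters out — is the same.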

Out-of-the-box model

Although the solver-inference engine is based on a core model supplied by Cadence, it is intended to be tuned over time on a customer’s own design database so that it can more accurately reflect the design styles and assertion usage of the in-house teams.

A second engine uses a mixture of reinforcement learning and input from human experts to determine how best to deploy engines in parallel across the available compute resources, as well as when to give up on a particular solver and move to another because it is taking too long. “This gets better results out of the box [than without] but it makes a bigger difference when run again and again.”
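The "run again and again" behaviour is characteristic of reinforcement learning: each proof run feeds back into future scheduling choices. A minimal sketch, assuming an epsilon-greedy multi-armed bandit (the engine names, reward function and timeout budget are invented; the article's orchestrator also folds in expert input):

```python
# A minimal sketch of engine orchestration as a multi-armed bandit.
# Engine names, rewards and the epsilon-greedy rule are illustrative
# assumptions, not the vendor's actual algorithm.
import random

class EngineScheduler:
    def __init__(self, engines, epsilon=0.1):
        self.engines = list(engines)
        self.epsilon = epsilon
        self.value = {e: 0.0 for e in engines}   # running mean reward
        self.count = {e: 0 for e in engines}

    def pick(self):
        if random.random() < self.epsilon:       # explore occasionally
            return random.choice(self.engines)
        return max(self.engines, key=self.value.get)

    def update(self, engine, solved, seconds, budget=60.0):
        # Reward fast proofs; a timeout ("give up and move on") scores 0.
        reward = max(0.0, 1.0 - seconds / budget) if solved else 0.0
        self.count[engine] += 1
        self.value[engine] += (reward - self.value[engine]) / self.count[engine]

random.seed(0)
sched = EngineScheduler(["bmc_sat", "ic3_pdr", "bdd_reach"], epsilon=0.0)
# Feed back results from a few hypothetical proof runs.
sched.update("bmc_sat", solved=True, seconds=12.0)
sched.update("ic3_pdr", solved=True, seconds=45.0)
sched.update("bdd_reach", solved=False, seconds=60.0)  # timed out
print(sched.pick())  # the engine with the best observed reward so far
```

Repeated runs sharpen the value estimates, which is consistent with the quote: the scheduler works out of the box but improves as regression runs accumulate.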

Like Cadence, OneSpin found that solver orchestration based on machine learning is effective. The company is armed with more than two decades' worth of collected designs, extending as far back as the team's original work at Siemens in the early 1990s.

Dominik Strasser, vice president of engineering at OneSpin, says of the machine-learning functions: “Users do not normally have to pick proof algorithms and we now have a system that has faster proofs for time-to-hold,” adding that time-to-fail has rarely been an issue.

However, using machine learning to predict runtime can easily run into trouble. Using principal component analysis, OneSpin found its training set did show an ability to predict runtime, but when the model was deployed on designs outside the training data the correlation dropped off.
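The failure mode described here is a generalisation gap: strong correlation in-sample, weak correlation out-of-distribution. A toy sketch of that check, with invented data and a single least-squares fit standing in for the PCA-based model:

```python
# Sketch of the generalisation check described above: a predictor fitted
# on training designs, then correlated against held-out designs. Data and
# fit are invented for illustration (no real PCA here).

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical (design size, runtime) pairs: training vs unseen designs.
train = [(10, 21), (20, 39), (30, 62), (40, 81)]    # near-linear regime
unseen = [(15, 90), (25, 20), (35, 200), (45, 35)]  # different regime
a, b = fit_line([x for x, _ in train], [y for _, y in train])
corr = pearson([a * x + b for x, _ in unseen], [y for _, y in unseen])
print(round(corr, 2))  # weak correlation on out-of-distribution designs
```

The point of the sketch is only that a model can fit its training regime well and still correlate poorly with runtimes on structurally different designs, which matches OneSpin's observation.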

Experts and machines

Hardee agrees that machine learning has problems with runtime analysis. “How reasonable it is to allow the engine to continue is not as predictable as with simulation. The engine can be working really hard and seemingly doing nothing but then make a breakthrough. It takes a lot of design experience to be able to make a judgment.”

Cadence’s approach is to use a more traditional expert system to provide advice on runtimes and to suggest other options when a solver seems to freeze. “An expert system can give realistic suggestions when a property gets stuck, such as trying an abstraction, or getting rid of or remodelling a counter that’s extending the state space too far, so that it’s only counting a few bits rather than 2K.”

Hardee says there is scope to build machine learning on top of such an expert system. “Once the formal experts have set parameters, the machine-learning component can learn from that.”

As part of its ongoing work on applying AI to formal verification, a new project at OneSpin is aimed at helping with the debug process. “The tool finds out which part is the culprit. This is a very difficult problem to solve and a task for the future but we have some intermediate results. But don’t hold your breath,” Strasser says.

Another path OneSpin aims to pursue is using deep learning to analyse verification data. “Maybe then the runtime prediction will get better results,” Strasser hopes.
