DVCon Europe explores pitfalls and possibilities of AI for verification

By Chris Edwards | Posted: October 27, 2021

In a panel at this week’s DVCon Europe (October 27th, 2021), experts described a number of issues facing teams looking to incorporate machine learning into logic verification flows, and explained why some of those efforts will not pay off while others will succeed.

Verification at first seems an ideal candidate for the kinds of machine-learning algorithms available today: it throws up enormous quantities of data, and many of its tasks involve looking for patterns of behavior.

Tushit Jain, senior director of machine learning at Qualcomm, said the company’s engineers have been developing ways to improve coverage closure using the technologies. In the wider market, machine learning is showing promise: it has been used in some cases to help streamline regression tests and to focus formal verification tools on algorithms that suit particular types of logic block. And at the physical level, machine learning is becoming an accepted tool for characterising cell libraries more quickly than brute-force simulation.

John Rose, product engineering group director at Cadence Design Systems, explored a number of the apparent possible applications for machine learning in verification. “Can we help engineers categorize problems and help root-cause them more easily and quickly? Machine learning can help you as a user to direct the tool. If I’m getting bugs in one part of the design and I don’t seem to be getting enough representation in this area, can machine learning help me to address the corner cases that will expose more failure signatures?”

Debug priorities

Triage for debugging is an area where Verifyter has focused its attention. Daniel Hansson, founder and CEO of the tools supplier, explained how the tool performs bug prediction on commits to a code base. “It does a risk analysis of them and gives a number to each commit to estimate how likely it is to contain a bug. Rather than treating all commits as equals, you can focus on the top five that might be responsible for a bug. It lets you go through the debugging process much faster.”
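Verifyter has not published the internals of its risk model, but the workflow Hansson describes can be sketched: score each incoming commit with a classifier trained on historical commits that debugging later showed to be buggy or clean, then inspect the highest-scoring handful first. The features, classifier, and numbers below are illustrative assumptions, not the product’s actual algorithm.

```python
# Illustrative sketch only: the features and classifier here are
# assumptions for demonstration, not Verifyter's actual model.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Commit:
    sha: str
    lines_changed: int
    files_touched: int
    touches_rtl: int  # hypothetical feature: 1 if RTL files changed

# Train on historical commits that debugging later labelled buggy (1)
# or clean (0); these rows are made-up stand-ins.
X_hist = [[250, 12, 1], [4, 1, 0], [90, 6, 1], [15, 2, 0]]
y_hist = [1, 0, 1, 0]
model = LogisticRegression().fit(X_hist, y_hist)

def rank_commits(commits: list[Commit]) -> list[tuple[str, float]]:
    """Return (sha, estimated bug probability) pairs, riskiest first."""
    X = [[c.lines_changed, c.files_touched, c.touches_rtl] for c in commits]
    probs = model.predict_proba(X)[:, 1]  # P(commit contains a bug)
    return sorted(zip([c.sha for c in commits], probs), key=lambda t: -t[1])

# Debug only the top five riskiest commits rather than all of them:
# top_five = rank_commits(commits)[:5]
```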

In many cases, there remains a sizeable gap between the promise and the reality, panelists noted. The data produced by logic verification tools is not necessarily useful, and it might not be nearly as big as it seems. In principle, you could get a model to learn about blocks in a design that seem to be more troublesome than others. But as Renesas principal digital verification engineer Darren Galpin pointed out, much of the logging that a simulator or emulator might perform is often switched off to reduce runtimes.

Raviv Gal, manager for hybrid cloud quality technologies at IBM Research, identified a number of issues with trying to apply machine learning to logic verification. “Hardware verification is a mature area. We already have mature tools and mature methodologies: these are hard to beat. Also, there are no public open benchmarks to measure progress, such as those we have in the formal-verification community.”

There are fundamental issues with the nature of the data itself. A particular problem for many of the current machine-learning technologies, Gal said, is their need for data that has been labelled carefully to indicate what it means. Much of the data that tools generate, such as logs, does not have useful labels for training. He pointed to one experiment to find patterns in simulation logs in which the model effectively just reverse-engineered the names of the tests.

Data availability

A further problem is that the data changes constantly as the project progresses and the design and its testbenches are refined. Galpin also pointed out that the data might not be available in a useful form: “Tools need to be a lot more interoperable. To gain from machine learning I need to be able to share data from all those tools.”

Galpin said there are potential issues with optimising for goals such as coverage efficiency, noting that functional coverage is often most valuable for revealing the parts of the design that have not been hit for whatever reason. “If you do apply machine learning, we saw on the project I was involved with that you can improve time to hit coverage significantly. The drawback is that if you run fewer simulations you lose the serendipitous hits to parts of the design that are not well covered. And it’s the parts of the system you haven’t considered where many of the bugs lie.”

Acceptable AI

Acceptance of AI-based techniques is an issue for any team. “It’s OK to use machine learning to propose a way of completing a sentence. But it’s not OK in our case to potentially accuse individual engineers of creating bugs,” said Hansson. One mitigation, he added, is to use filtering to weed out clear false positives in an analyzer for triaging problems. Another is to set expectations on what the tools can do and what the results mean: prioritizing operations is different to telling engineers to fix a bug they may have had no hand in creating.
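One way to read the filtering mitigation Hansson mentions: suppress predictions an engineer would dismiss at a glance before anyone is notified. The confidence threshold and the docs-only rule in this sketch are hypothetical examples, not the product’s actual filters.

```python
# Hypothetical sketch of false-positive filtering for a triage
# analyzer; the rules are assumptions, not Verifyter's actual filters.
from typing import Callable

def filter_clear_false_positives(
    ranked: list[tuple[str, float]],
    is_docs_only: Callable[[str], bool],
    min_prob: float = 0.5,
) -> list[tuple[str, float]]:
    """Keep only predictions worth showing to an engineer.

    ranked: (commit sha, bug probability) pairs, riskiest first.
    is_docs_only: predicate that is True for commits touching only
    comments or documentation, which cannot break a regression.
    """
    return [(sha, p) for sha, p in ranked
            if p >= min_prob and not is_docs_only(sha)]
```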

“Don’t pick problems where you want 100 per cent accuracy,” Jain said. “You want to assist designers, not replace them.”

Galpin added, “It’s like weather forecasting: you want to prepare people for the idea that it will get something wrong. If that’s the case, they are more likely to accept it.”

Some of the existing tools that benefit from machine learning have simply been hiding in plain sight. Adnan Hamid, CEO and CTO of Breker Verification Systems, pointed to the techniques used in the company’s verification planners. “When we started we didn’t call it AI because AI wasn’t cool then,” he said.

“My belief is that there is no other way than a hybrid approach. We need to combine domain knowledge with AI to boost performance,” Gal said, pointing to a system IBM developed for coverage-directed test generation after other approaches ran into a brick wall. “What helped us break the wall is to define the problem as an optimization problem. We boosted a classic optimization algorithm with what we call a lightly trained deep neural network,” Gal said. It was a very small model by DNN standards, but trained in such a way that it could identify improvements in coverage for a given number of cycles compared with conventional constrained random verification.

“I think ML can be a performance booster for the next generation of verification but it’s not the only driver. It’s a hybrid model that’s the future.”
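The article does not give the internals of IBM’s system, but the hybrid pattern Gal describes can be sketched under assumptions: treat the test generator’s knobs as the optimization variable, lightly train a small network to predict coverage from those knobs, and let a classic optimizer propose the next settings. The knob encoding, surrogate size, and optimizer below are all placeholders, not IBM’s published design.

```python
# Minimal sketch of a hybrid coverage-directed-generation loop; the
# knob encoding, surrogate size, and optimizer are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def run_regression(knobs: np.ndarray) -> float:
    """Stand-in for an expensive simulation run: returns the coverage
    achieved for one setting of the test generator's knobs (fake math)."""
    return float(np.sin(knobs).sum() + rng.normal(scale=0.1))

# "Lightly trained": fit a small surrogate on a few dozen real runs.
X = rng.uniform(-2.0, 2.0, size=(32, 4))            # 32 runs, 4 knobs
y = np.array([run_regression(x) for x in X])
surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000).fit(X, y)

# Classic optimization step: sample many candidate settings cheaply and
# simulate only the one the surrogate predicts will raise coverage most.
candidates = rng.uniform(-2.0, 2.0, size=(1000, 4))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("next knob setting to simulate:", best)
```

In a loop like this the surrogate only guides which expensive runs to launch; the real coverage numbers still come from simulation, which is what keeps the domain knowledge in charge.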
