‘Process and metrics before tools for better verification’

By Chris Edwards  |  Posted: November 19, 2012
Topics/Categories: Blog - EDA

Chip-design teams are running into problems with verification because they are focused too much on tools and not enough on processes, Mentor Graphics chief scientist Harry Foster explained today at the first of a series of Verification Futures seminars hosted by TVS in Europe this week.

Foster said the focus on tools has led in some cases to a “verification paradox” where investment in new techniques intended to improve efficiency has failed to deliver. For example, the results of a survey from last year indicated that SystemVerilog adoption has been rapid. “It’s the fastest growing HDL in our history,” he claimed.

Foster has just started to look at this year’s data: “The data I’m looking at shows the demand is continuing this trend.”

Alongside the rise in SystemVerilog adoption are “fairly large increases in code and functional coverage. This is where the paradox comes in. One thing we find is that companies say ‘we do this’. But that does not accurately reflect what they do. Very few places measure their successes even when they think they are adopting advanced techniques”.

Foster cited work at Cisco that looked at how its hardware design teams improved productivity through the adoption of new techniques. “Groups that focus on tools first and then look at process do not increase productivity,” he said. In fact, the study found cost increases of up to 9 per cent. Where the process came first, savings of up to 30 per cent were achieved.

Taking a leaf out of Bill Deming’s book, Foster cited metrics as one of the keys to ensuring that processes are working and delivering results. “Metrics provide visibility,” he said, running through a list of possible candidates, depending on what processes are being used. “I’m not advocating that you adopt these, just giving examples.”

Examples include code stability. “This is an interesting metric, to see if something is changing rapidly,” said Foster. “Another is verification effectiveness: why haven’t we found bugs with certain checkers?

“One I love is bug density. This has been used in software for practically ever. Typically, you will find 10 to 50 bugs per thousand lines of code, depending on factors such as concurrency. It is an interesting one to track: you should expect a similar bug density or ask ‘what is going on?’”
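As a rough illustration of how such a check could be automated, the sketch below computes bugs per thousand lines of code for a set of modules and flags anything outside the 10 to 50 bugs/KLOC band Foster quoted. The module names, per-module figures and helper functions are hypothetical, not something from the talk.

```python
# Illustrative sketch only: bug density (bugs per thousand lines of code).
# The 10-50 bugs/KLOC band comes from the figures quoted above; the module
# data and function names are made-up assumptions.

def bug_density(bug_count: int, lines_of_code: int) -> float:
    """Return bugs per thousand lines of code (KLOC)."""
    return bug_count / (lines_of_code / 1000.0)

def flag_outliers(modules: dict[str, tuple[int, int]],
                  low: float = 10.0, high: float = 50.0) -> list[str]:
    """Flag modules whose density falls outside the expected band,
    prompting the 'what is going on?' question."""
    outliers = []
    for name, (bugs, loc) in modules.items():
        density = bug_density(bugs, loc)
        if density < low or density > high:
            outliers.append(f"{name}: {density:.1f} bugs/KLOC")
    return outliers

# Hypothetical per-module data: (bugs found, lines of code)
modules = {"dma_ctrl": (42, 3000), "cache_coherence": (9, 5000)}
print(flag_outliers(modules))  # cache_coherence at 1.8 bugs/KLOC looks suspiciously low
```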

How quickly bugs are being found provides another angle of attack. “It’s interesting when it does level off; there are questions you can ask: is it wrong? Is it time to change strategy?”
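A similarly minimal sketch, again using made-up numbers rather than anything from the talk, shows one way a team might spot when the weekly find rate has levelled off and it is time to ask those questions.

```python
# Illustrative sketch only: tracking the weekly bug-discovery rate and
# flagging when it levels off. The weekly counts, window and threshold
# are assumptions for the example, not figures from Foster's talk.

def has_levelled_off(weekly_bug_counts: list[int],
                     window: int = 3, threshold: int = 2) -> bool:
    """Return True if each of the last `window` weeks found fewer
    than `threshold` new bugs."""
    if len(weekly_bug_counts) < window:
        return False
    return all(count < threshold for count in weekly_bug_counts[-window:])

# Hypothetical discovery curve: rapid finds early on, tailing off later.
weekly_bugs = [25, 31, 18, 12, 7, 3, 1, 0, 1]
if has_levelled_off(weekly_bugs):
    print("Find rate has levelled off: question the coverage, or change strategy.")
```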

Foster explained that new metrics are needed above the level of the code itself in SoC design. “New challenges are coming forth because of the amount of IP that’s used. Just 27 per cent of the average design is new. The rest is internal or external IP. In the past there was minimal interaction between these IP cores. But things got bad when we started to get interaction between these blocks.”

For example, a growing number of systems now rely on multiple interacting processors, linked using cache-coherency protocols. “There are state machines that require the entire SoC to be assembled.”

This need for complete assembly not only kills simulation speed but also creates the need for new metrics to capture how bugs in the interactions between units within the design are uncovered and fixed.
