Doc Formal: the crisis of confidence facing verification III

By Ashish Darbari | Posted: December 5, 2017
Topics/Categories: EDA - Verification

Ashish Darbari is CEO of formal verification consultancy and training provider Axiomise.

How to Optimize the Verification Meta Model II

In the first two parts of this series, I described the verification crisis, explained how it came about, and began to describe the pillars of a responsive design verification (DV) flow and the components within each of them.

Part One defined a verification meta model as a way to describe the key aspects of the crisis and laid out high level ideas for alleviating it.

Part Two considered two of the four main pillars of an optimized DV flow: ‘Requirements and Specifications’ and ‘Verification Strategy and Plan’.

Both are best read before moving on to this third and final part, which considers the two other supporting pillars in that flow: ‘Debug’ and ‘Signoff and Review’.

Debug represents the biggest contributor to verification cost, while Signoff and Review is the most highly valued item in a successful flow.

As before, this review is intended to give you a checklist of items and techniques that you should incorporate within your DV flow.

Design verification flow II

Debug

1. What debug process is being used in the project?

Does your debug process mandate the use of best-in-class debuggers but expect engineers to know how to drive them without providing additional training?

Debug is often estimated to consume about 70% of the overall design verification time. So, while the selection of a particular debugger may be a matter of personal choice – the world will always be divided over which is the best debug tool – what is not debatable is that DV engineers need good hands-on training in debugging.

Shortening the debug cycle depends not only on being able to use the debugger, but also on the engineer’s knowledge of what is being debugged, as well as his or her familiarity with specifications and the design intent.

Nevertheless, being able to go from a point of failure to its root cause is a skill that can be taught. Too often, when I hear from management that their engineers take too long in debug, the staff actually performing the task respond by pointing to a lack of training in, or familiarity with, the debugger.

2. Do your engineers know what to do when a failing debug trace is found?

When is a failing trace a real design bug? What is the process that a verification engineer needs to follow to establish this? This response process is often left unspecified in a verification program.

In the good old days, when design and verification engineers worked in the same office, this was less of an issue. The designer would either be verifying his or her own design or could simply look over the shoulder of the DV engineer. The whole debug process is a lot more challenging today: both designs and test benches have become complex and the design and verification teams are often thousands of miles apart in different time zones.

A good debug process should therefore establish some ground rules. One is a process whereby a failure is analyzed locally and step-by-step by the DV engineer, largely to ensure that it is not due to missing constraints. The failure can then, if necessary, be manually triaged and reviewed by a senior verification lead to determine whether the failing trace is likely to be a design bug and should therefore be assigned to a designer for review.
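To illustrate that first ground rule, here is a minimal SVA sketch of how a spurious failure gets resolved locally before anyone is blamed. The module, signal and property names (push, full, overflow and so on) are my own illustrative assumptions, not taken from any particular project.

```systemverilog
// Illustrative sketch only: a formal testbench fragment where a failing
// trace stems from a missing constraint rather than a design bug.
module fifo_env_checks (
  input logic clk,
  input logic rst_n,
  input logic push,      // request from the (as yet unconstrained) environment
  input logic full,      // FIFO status flag from the design
  input logic overflow   // design error flag being checked
);

  // The failing check: the design should never raise 'overflow'.
  a_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
                                  !overflow);

  // Step-by-step local analysis of the counterexample shows the tool pushing
  // into a full FIFO, something the real interface never does. Encoding that
  // interface rule as an assumption removes the spurious failure; only a
  // failure that survives this constraint should be triaged with a senior
  // lead and, if warranted, assigned to the designer.
  m_no_push_when_full: assume property (@(posedge clk) disable iff (!rst_n)
                                        full |-> !push);

endmodule
```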

I have often found that just picking up the phone and discussing such a failure with the designer is healthy. It saves time on both sides, and provides valuable insights toward adequate constraining of the design. This is especially true when specifications are not provided up-front or are ambiguous.

Moreover, simply talking has a further benefit. It fosters respect on both sides of the design-and-verification line. By promoting an atmosphere where different disciplines interact freely and work as a single team toward finding the root cause of a failure, the focus remains on solving the problem rather than playing the blame game.

Signoff and Review

1. Do you have a process of reviewing verification milestones?

Most people would agree that there are primarily two ways of closing out verification. One is anecdotal, done by bug tracking. Another is analytical, done by computing coverage metrics.

We will look at both of these later, but I want first to highlight a third way of assessing verification closure: a review process. I call this a ‘process’ because you need to apply it across all aspects of design and verification, not only during verification. Here’s how such a process can work.

A design review process describes how often the design code is manually reviewed. A verification review process summarizes the key verification milestones and defines how five other elements are reviewed: the verification strategy, the verification plan, the verification code, coverage results, and bug tracking reports.

Each review is defined by three characteristics: when it should be carried out and by whom; a templated list of questions that needs to be answered by the verification engineer; and an independent reviewer – ideally not the designer or the verification engineer directly involved in the original tasks, but someone who still has the knowledge to assess the overall status of verification.

2. Do you have everything necessary to ensure good coverage?

There are several ways of calculating coverage metrics. They start with basic structural coverage, which includes code coverage, toggle coverage, and FSM coverage, and stretch at the other end of the spectrum to more complex, but also more meaningful, functional coverage (often also called ‘semantic coverage’).

We primarily think of ‘coverage’ as a means of analyzing a design to understand what parts of its functionality have been covered by the verification. However, other forms that relate to the well-formedness of the design are also valuable. These include the structural coverage metrics that are part of the metric-oriented view.

Though these metrics are a fairly good indicator of the quality of your overall verification, the main problem is that they are only as good as the cover groups or cover properties you apply. And these in turn are only as good as the functional coverage specification – and that itself depends on the quality of your overarching specifications and requirements.
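To make that dependency concrete, here is a minimal functional coverage sketch. The packet interface, its operation and length fields, and the bin boundaries are hypothetical; the point is that the cover points are only as meaningful as the functional coverage specification that motivates them.

```systemverilog
// Illustrative functional coverage sketch for an assumed packet interface.
typedef enum logic [1:0] {READ, WRITE, ATOMIC} op_t;

class packet_coverage;
  op_t        op;
  logic [7:0] len;

  covergroup cg_packet;
    cp_op  : coverpoint op;           // has every operation type been seen?
    cp_len : coverpoint len {         // length buckets from the (assumed) spec
      bins zero      = {0};
      bins short_pkt = {[1:15]};
      bins long_pkt  = {[16:255]};
    }
    x_op_len : cross cp_op, cp_len;   // combinations, not just individual hits
  endgroup

  function new();
    cg_packet = new();
  endfunction

  // Called by the testbench whenever a packet is observed on the interface.
  function void sample_packet(op_t o, logic [7:0] l);
    op  = o;
    len = l;
    cg_packet.sample();
  endfunction
endclass
```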

Moreover, this analysis works by identifying whether it was possible for a design feature to be exercised at least once, or whether a certain combination of events was reached by the stimulus, and how many times. It does not establish whether the design is guaranteed to have a certain behavior, because the analysis is done on top of simulation testbenches. By itself, it provides no guarantees.

Of course, if formal verification has been used, then there are stronger guarantees that can be made on the quality of checks, the detection of over-constraints, and even the identification of corner case bugs through coverage.
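As a sketch of what that looks like in practice (reusing the hypothetical FIFO signals from the earlier debug example), cover properties can be handed to a formal tool to prove a scenario reachable or unreachable. An unreachable cover is a strong hint of an over-constraint; a reachable one gives exhaustive evidence that a corner-case scenario can occur.

```systemverilog
// Illustrative formal coverage sketch; signal and module names are assumptions.
module fifo_formal_cover (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);

  // Can the FIFO ever become full? If the formal tool reports this as
  // unreachable, either the design can never fill up or an assumption is
  // blocking the scenario; both findings deserve a review.
  c_reach_full: cover property (@(posedge clk) disable iff (!rst_n)
                                full);

  // Can a push be followed immediately by a pop while data is present?
  // Corner-case sequences like this are where formal coverage adds value
  // beyond a simple 'was this line exercised' metric.
  c_push_then_pop: cover property (@(posedge clk) disable iff (!rst_n)
                                   (push && !full) ##1 (pop && !empty));

endmodule
```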

Summary

I began this series by identifying a verification crisis. The challenge lies in how we mitigate and manage it. We can go a long way toward achieving those goals by using the verification meta model and by identifying and deploying the value in the main characteristics of its four pillars: Requirements and Specifications, Verification Strategy and Plan, Debug, and Signoff and Review.

A common theme is that good processes, clear plans for training, and strong methodological development can make a difference. Together, they will help you develop an efficient DV flow that DV engineers can deploy consistently across engineering teams.

As we look ahead to 2018, I challenge you to make it a New Year’s resolution to examine your verification meta model and look for ways to improve. Your team will thank you, and so will your customers.

Now that you have read all three parts of this series, how are you feeling about your team’s verification meta model? Has this series helped you identify any areas that need improvement? And did I miss anything that should be considered when approaching a DV flow? Sound off in the comments below or share your thoughts with me on Twitter (@AshishDarbari).

