Doc Formal: The crisis of confidence facing verification II

By Ashish Darbari | Posted: November 29, 2017

Ashish Darbari is CEO of formal verification consultancy and training provider Axiomise.

How to optimize the verification meta model

In the first part of this series (available here), I described the need for good verification meta models and the qualities that define them. Those are the right combination of verification technologies and methodologies deployed in the correct order, by a trained workforce of skillful design verification (DV) engineers, with the goal of minimizing risk by hunting down bugs before they reach silicon.

The key requirement within the meta model is that it outlines a methodology that describes an efficient DV flow to catch bugs as early as possible (shift-left) and gives the user certainty that desired quality levels will be met on schedule. In simple terms: Quality must not compromise productivity, and equally, productivity must not compromise quality.

In the rest of the series, starting here, I look at the main components of such a DV flow and offer some suggestions as to how they – and beyond that the meta model itself – should be realized. These components are: requirements and specifications; verification strategy and plan; debug; and sign-off and review.

[Figure: Design verification flow]

Requirements and Specifications

1. Do you have good specifications? Also, have they been reviewed for consistency?

In many high-pressure engineering environments, good specifications do not exist. Implementation often takes priority over documentation. I’m not a big fan of this. Wherever we can, we must strive for good specifications and review existing ones for consistency.

A great way of resolving specification ambiguities is to describe block interfaces systematically using formal properties in SystemVerilog Assertions (SVA) or the Property Specification Language (PSL). Such descriptions have the benefit that their properties can be executed in formal tools as well as in simulation and emulation.
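As a sketch of what such an interface description can look like, here is a hypothetical valid/ready handshake captured in SVA. The signal names and the exact protocol rules are assumptions for illustration, not taken from any particular design:

```systemverilog
// Hypothetical interface rules for a valid/ready handshake,
// written as SVA properties and bound to the block's ports.
module handshake_props (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);
  // Once raised, valid must hold until the transfer is accepted.
  assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid);

  // Data must stay stable while a transfer is pending.
  assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> $stable(data));
endmodule
```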

By executing the descriptions in formal tools, the DV engineer can then start gathering results through failing properties or exhaustive proofs without having to write input stimuli. If the failures are spurious – meaning that illegal stimuli have been driven into the design – the DV engineer can add constraints (formal assumptions) to rule out that stimuli.
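A minimal sketch of such a constraint, assuming a hypothetical block that signals busy and must not receive a new command in that state:

```systemverilog
// Hypothetical constraint module, bound into the block's formal
// testbench: the environment must not issue a new command while
// the block signals busy, so counterexamples that drive such
// illegal stimuli are ruled out.
module cmd_constraints (input logic clk, rst_n, busy, cmd_valid);
  assume property (@(posedge clk) disable iff (!rst_n)
    busy |-> !cmd_valid);
endmodule
```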

This process should open up a dialog between the DV engineer and the designer where they jointly explore detailed specification attributes on inputs of a block, which, when clarified, become the checks on the output of the driving block. Thus, the process of defining clear interfaces on the outputs of one block naturally helps to define the interfaces for the input-driving block.
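A minimal sketch of this two-sided view, assuming a hypothetical request/grant interface between blocks A and B: the same rule is assumed at B’s input and asserted at A’s output.

```systemverilog
// Bound into block B's formal testbench: the rule constrains B's inputs.
module b_input_props (input logic clk, rst_n, req, gnt);
  // Hypothetical rule: a request, once raised, holds until granted.
  assume property (@(posedge clk) disable iff (!rst_n)
    req && !gnt |=> req);
endmodule

// Bound into block A's formal testbench: the same rule checks A's outputs.
module a_output_props (input logic clk, rst_n, req, gnt);
  assert property (@(posedge clk) disable iff (!rst_n)
    req && !gnt |=> req);
endmodule
```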

If the process is followed diligently, you get nicely specified interfaces for the entire design and a much clearer specification. The process of executing these properties finds bugs or even helps avoid bugs being introduced in the first place, as designers get into the habit of thinking more formally through the dialog process.

Similarly, as they define interfaces, designers and DV engineers get used to writing formal properties. This experience then leads to their implementation for internal whitebox checks on the design block, thereby exposing further bugs.
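For instance, a hypothetical whitebox checker for a 16-deep FIFO inside a block might look like this (all signal names are illustrative):

```systemverilog
// Hypothetical whitebox checks, bound inside a 16-deep FIFO block
// (e.g., via SystemVerilog 'bind'). Signal names are illustrative.
module fifo_whitebox_props (
  input logic       clk,
  input logic       rst_n,
  input logic [4:0] count,   // internal occupancy counter
  input logic       rd_en,
  input logic       empty
);
  // The occupancy counter must never exceed the FIFO depth.
  assert property (@(posedge clk) disable iff (!rst_n) count <= 16);

  // A read must never be accepted while the FIFO is empty.
  assert property (@(posedge clk) disable iff (!rst_n) !(rd_en && empty));
endmodule
```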

2. Are you constantly updating the specifications?

When interface specifications are captured formally, executing them daily on all design blocks means that any design change is likely to cause failures in the formal property regressions. These failures immediately highlight the gap between the implementation and the specification, prompting the specification to be updated.

3. Are your specification updates communicated clearly to the DV engineer?

There are several ways of establishing communication between designers and DV engineers: email, code check-ins, face-to-face meetings, whiteboard chats, or dedicated meetings to review the status of verification. All are useful to some degree.

The one I like most is the status meeting, preferably conducted face-to-face. Any process by which both designers and verification engineers become used to writing interface specifications in a disciplined manner makes it much easier to organize such meetings and, equally important, the follow-up process.

4. If you do not have time for good specifications, do you have an alternative?

When projects cannot devote any time to a disciplined interface specification process, you should still have some kind of alternative mechanism.

It could be a halfway house where you describe only part of the specifications, defining and focusing on key verification concerns. The problem with this approach is that the functional coverage process that is likely to be needed later for sign-off will still require a good specification. Still, such a halfway house is decidedly better than nothing!

Verification Strategy and Plan

1. Do you have a verification strategy document for review by architects, designers, and DV leads?

I often come across verification projects where the verification strategy has not been clearly scoped out.

In such instances, you will, for example, sometimes find that the DV engineer is running a legacy test bench originally written by someone else. So you have one engineer who is responsible for running tests, collecting coverage, and reporting back to the verification lead, but who may never have been asked to consider the details of why he is doing what he is doing.

Is there a defined strategy? How do we know that this test bench is adequate for exposing bugs in what could be a newer version of a design?

By following a process where a verification strategy document is an absolute requirement for every project, the engineer can refer back to why a legacy test bench was originally built. In turn, this will encourage a dialog between designers and architects on what new issues may now be of significance and need to be considered when reusing the old test bench.

For me, this is a no-brainer: Reviewing a three- to four-page strategy document is much easier than reviewing several thousand lines of test bench code to understand the adequacy of the test bench. Having a verification strategy document is low-hanging fruit that can add enormous value.

2. Do your verification strategy document and plan have a Plan B?

Things often don’t go according to plan. Having a Plan B can only be a good thing. Consider a case where a new block is being verified and the plan is to use constrained random as the main technology. Let’s say that the block is so complex that it is conceivable that not all corner case bugs will be flushed out by constrained random. Is there a backup plan to find some of these with an alternative technology?

Formal may be great at finding those corner case bugs, especially deadlocks; so, Plan B may be to apply it to those parts of the design where the risks are highest.

But the converse can be true. Where formal is the main technology but is expected to deliver some proofs that are inconclusive, an equally good Plan B would be to use directed test. Directed would then focus verification on those parts of the design for which formal’s proofs are inconclusive.

3. Has your DV engineer used all the available technology – especially formal – to ensure that designs are brought up free from simple bugs?

This is an important point. A lot of bugs that are simple to catch with formal are only picked up late in verification if the strategy is more dependent on dynamic simulation or even emulation. This leads to long debug times and even longer fixes.

It is also the case that some bugs are very difficult to catch with simulation or emulation (e.g., bugs in FSMs, deadlocks, dead code, or even redundant code).
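A deadlock, for example, can be targeted directly with an SVA liveness property, which formal can prove exhaustively but simulation can only sample. A minimal sketch, assuming a hypothetical request/grant pair:

```systemverilog
// Hypothetical liveness check: every request must eventually be
// granted. A failure of this proof comes with a concrete trace
// showing how the design deadlocks or livelocks.
module deadlock_props (input logic clk, rst_n, req, gnt);
  assert property (@(posedge clk) disable iff (!rst_n)
    req |-> s_eventually gnt);
endmodule
```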

Linting, sometimes called ‘autochecks’ in formal tools, can catch out-of-bounds violations automatically and easily.

This point illustrates the importance of leveraging an appropriate combination of verification technologies.

4. Is your mix of verification technologies suitable for the project?

I often come across projects where, based on previous experience, the team shows a very strong preference for and reliance upon a particular verification technology, such as UVM for SoC verification. But while UVM is a powerful paradigm, it may not be ideal for full SoC test.

Alternative techniques – directed test, portable stimulus, or even formal – can also be strong candidates for consideration.

And again, this cuts two ways. While directed test makes sense for carrying out read and write checks from masters to slaves in the SoC, a more well-rounded test program in the presence of multiple masters might want to include some stalling and randomization, something intrinsic to constrained random technology (i.e., UVM).
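As a sketch of why this matters, a constrained-random sequence item can randomize the stall gaps between master transactions almost for free. The class below is a hypothetical illustration, not taken from any specific verification IP:

```systemverilog
// Hypothetical UVM sequence item that randomizes the idle gap
// (stall) before each bus transfer, biased towards short gaps but
// occasionally long enough to stress arbitration between masters.
`include "uvm_macros.svh"
import uvm_pkg::*;

class bus_txn extends uvm_sequence_item;
  rand bit [31:0]    addr;
  rand bit [31:0]    data;
  rand int unsigned  stall_cycles;

  // Mostly back-to-back traffic, with occasional long stalls.
  constraint c_stall {
    stall_cycles dist { 0 := 60, [1:3] := 30, [4:20] := 10 };
  }

  `uvm_object_utils(bus_txn)

  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass
```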

Further, if you have a complex bus fabric in the SoC, neither directed test nor constrained random is a good choice; in fact, the use of formal would be ideal to expose corner case bugs. After all, if the bus fabric deadlocks, so will the entire SoC.

Meanwhile, although all of this discussion makes sense in the context of an SoC verification, the options you should consider will necessarily be different for IP-level verification.

When it comes to verification technologies, one size most certainly does not fit all – as with clothing, tailored is better, and bespoke is best.

I hope that’s given you something to think about as you consider your team’s current verification situation. In Part Three, I extend the discussion of the meta model into the remaining two aspects of a good DV flow: Debug and Sign-off & Review.

How does your team’s verification plan measure up to what I have begun to outline here? Have you identified any areas that need improvement? Share your thoughts in the comments below or on Twitter (@AshishDarbari).

