Thought you had verified your SoC? You probably only did half…

By Anders Nordstrom | Posted: January 5, 2016
Topics/Categories: EDA - Verification

Anders Nordstrom is a senior corporate applications engineer in Synopsys’ Verification Group, working on formal methodology and features of the VC Formal tool. He has 20 years’ experience in assertion-based verification and formal property verification, both in EDA and as a verification engineer.

…that is, if you’re not using formal property verification (FPV) in your verification flow.

Why do I say this? The major objective of SoC functional verification is to ensure that the design is performing its intended function correctly. This is usually done in simulation by creating lots of stimuli for the design under test, representing real use cases, and checking the responses. This continues until functional and code coverage metrics indicate that all scenarios in the verification plan have been covered and that all the significant blocks have been exercised.

However, another, often overlooked objective of SoC verification is ensuring that the design is not performing, or allowing, incorrect functionality or behaviors. These may sound like similar objectives, but they are vastly different. Given today’s focus on security and reliability, this second objective is arguably equally important, yet it cannot be met with simulation-based techniques. Exhaustively proving the absence of a functionality or behavior requires advanced formal verification technologies and FPV applications.

Incorrect functionality and behaviors in an SoC can be completely independent of the intended functionality, and can range from high-level unexpected behaviors, such as system deadlocks or allowing reads of protected memory locations, to lower-level functions, such as allowing multiple masters on a bus at the same time, or writing to a full FIFO. Incorrect functionality and behavior often impact the safety or security of a product, and hence can be very expensive – and even dangerous – to overlook.
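As an illustration, the bus mutual-exclusion requirement mentioned above can be captured directly as a property. This is a minimal sketch: the signal name gnt is a hypothetical arbiter grant vector, not taken from any particular design.

// Hypothetical example: at most one bus master may hold a grant at a time.
// $onehot0 is true when zero or one bit of its argument is set.
no_multiple_masters: assert property (@(posedge clk) $onehot0(gnt));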

Simulation techniques can never show that a behavior will never happen – something not happening in N clock cycles proves nothing about cycle N+1. If you try to read a protected memory location without the right permissions in simulation, and the access fails, you only know that it failed that one time. You don’t know whether it would still fail on the 100th attempt, or the 375th, or at the end of the simulation. No matter how many simulations you run, you can’t say for sure that it is impossible to access the memory location.
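A requirement like this protected-memory example can instead be stated as a property and handed to an FPV tool for an exhaustive proof. The signal names below are hypothetical, chosen only to illustrate the shape of such a property:

// Hypothetical signals: a read of a protected address must never
// complete unless the requesting master holds the required permission.
no_unauthorized_read: assert property (
  @(posedge clk) (mem_read && addr_is_protected) |-> has_permission
);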

Verifying that an incorrect behavior can never happen requires mathematical, proof-based formal verification technologies, such as those found in today’s advanced FPV tools. FPV tools let you write properties that precisely define the specific behaviors of interest, whether intended or illegal, that you wish to verify. For example, the read and write logic of a FIFO is supposed to ensure that you can never read from an empty FIFO or write to a full one. This behavior can be expressed in SVA as:

no_overflow:  assert property (@(posedge clk) !(full && push));
no_underflow: assert property (@(posedge clk) !(empty && pop));

When simulating, these conditions will be checked on every clock cycle. If they never fail, that does not mean the illegal conditions cannot occur, only that they have not yet been seen. In contrast, by using an FPV tool to verify the assertions, it is possible to prove that the illegal conditions can never happen, no matter how long simulations are run.
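One practical point worth adding here, as a methodology note rather than anything stated above: a formal proof is only meaningful if the conditions it reasons about are reachable. SVA cover statements can be used alongside the assertions to confirm, for example, that the FIFO can actually become full or empty:

// Reachability checks: the FPV tool should find traces reaching
// these states, showing the assertions above are not vacuous.
fifo_can_fill:  cover property (@(posedge clk) full);
fifo_can_drain: cover property (@(posedge clk) empty);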

So if you are employing only simulation-based verification, and are not using FPV tools in your verification flow, chances are that you are only verifying half of your design.


