Formal fundamentals: what’s hiding behind your constraints

By Sean Safarpour | Posted: July 17, 2018
Topics/Categories: EDA - Verification


Countless articles, blogs and books discuss formal verification fundamentals such as constraints. When starting out with formal verification, these topics are explained, discussed and mastered fairly quickly. However, as one delves into the details of verification and starts dealing with failures, debug and convergence challenges, it is easy to forget some of the fundamentals. That's what happened in a recent encounter with a customer.

We were talking with the customer about using formal verification more broadly in their organization. It was the kind of intellectually stimulating meeting where we were going deep, whiteboarding, and challenging each other, with the common goal of improving the verification process and methodologies. In the middle of the discussion, one of the users said:

“The problem was too hard for formal… I was not getting enough proofs, so I added some constraints, got all proofs and moved on….”

I had to stop the customer right there and ask what the intention behind adding the constraints was. The customer was looking to get better convergence (i.e. more proofs) by adding assumptions to the setup, turning many of the inconclusive or bounded properties into full proofs.

Seeing a long list of properties with green check marks next to them, showing they were proven, made the customer feel good, and they moved on, assuming that formal verification had just confirmed there were no bugs. What the customer had actually done was very dangerous, and we later had a good conversation about why.

Let's take a step back. The reason we need constraints or assumptions in the first place is to limit the behavior the formal tool analyzes to the design's legal states and input sequences. We don't want to be alerted to failing waveforms or counter-examples caused by input combinations that cannot happen in reality. So we add constraints to tell the tool to avoid analyzing certain behaviors.
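As a minimal sketch of what such a constraint looks like in practice (the module and signal names here are hypothetical, not from the customer's design), a SystemVerilog assumption can restrict a request bus to legal, at-most-one-hot stimulus:

    module req_constraints (
      input logic       clk,
      input logic       rst_n,
      input logic [3:0] req   // request lines driven by the environment
    );
      // Analyze only legal stimulus: at most one request per cycle.
      // Without this assumption, the tool would report counter-examples
      // driven by multi-hot inputs the real environment can never produce.
      assume_req_onehot0: assume property (
        @(posedge clk) disable iff (!rst_n) $onehot0(req)
      );
    endmodule

In practice, a module like this would typically be attached to the design under test with a bind statement, so the RTL itself stays untouched.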

One consequence of adding constraints can be to limit the exploration space for formal tools and thus improve the tool’s performance and/or the design’s convergence. This benefit comes at the cost of ignoring/avoiding some potentially legal (and even possibly buggy) state spaces. We call this approach over-constraining the design, and it is one of the most dangerous aspects of formal verification because it can deliver what look like full proofs, despite the fact that the design has not been fully analyzed.

Let’s say we have a design space that is the area of a square: Length x Width. If Length and Width are large (let’s say 100) and we want to look inside every single one of its 10,000 sub-squares to ensure it is free of bugs, that’s a large space to cover. However, if a constraint is added to say we should only analyze sub-squares placed where Length < 10 and Width < 10, then the state space to explore is much smaller (just 100 sub-squares) and can be analyzed more quickly. Importantly, though, the analysis has not addressed potential bugs in sub-squares where Length or Width >= 10.
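To make the analogy concrete, here is a sketch in SystemVerilog (the length and width signals are invented for illustration): the first assumption defines the 10,000-point legal space, and the second is the over-constraint that shrinks it to 100 points while hiding everything else.

    module square_constraints (
      input logic       clk,
      input logic [6:0] length,  // 7 bits: ranges over 0..127
      input logic [6:0] width
    );
      // The legal space: 100 x 100 = 10,000 sub-squares.
      assume_legal: assume property (
        @(posedge clk) length inside {[0:99]} && width inside {[0:99]}
      );

      // The over-constraint: only 10 x 10 = 100 sub-squares remain, so
      // the tool converges faster -- but a bug in any sub-square where
      // length or width >= 10 is now invisible, even though every
      // property can come back fully proven.
      assume_small_only: assume property (
        @(posedge clk) length < 10 && width < 10
      );
    endmodule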

This kind of over-constraint is a common hazard in formal verification: even when the intent behind a constraint is correct, it is still possible to over-constrain the design analysis accidentally. Luckily, VC Formal has very strong over-constraint analysis features that can be used to catch such situations. (There's a good webinar on this here.)
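Independent of any particular tool feature (this is not a description of VC Formal's own analysis), one simple, widely used sanity check is to write cover properties for behavior you expect to remain reachable; if the tool proves a cover unreachable, your assumptions have carved that behavior out of the analysis. Continuing the hypothetical example, this check could be added inside the same square_constraints module:

    // Should be reachable in the legal space; if it is proven
    // unreachable, the 'length < 10 && width < 10' assumption has
    // over-constrained the analysis.
    cover_large_square: cover property (
      @(posedge clk) length >= 10 && width >= 10
    );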

Some advanced users do over-constrain a design on purpose, to get more proofs or deeper bounds, but they do it in a systematic way, commonly referred to as case-splitting, in which a problem is broken down into multiple sub-problems which are each exhaustively verified. Advanced users also do an over-constraint analysis (for which there are many techniques) to ensure that they are not creating the conditions for bad surprises.
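As a sketch of what systematic case-splitting can look like (the mode signal and macro names are hypothetical), each formal run is compiled with exactly one macro defined, and the runs together cover every legal value of mode:

    module mode_case_split (
      input logic       clk,
      input logic [1:0] mode
    );
    `ifdef CASE_MODE0
      assume_mode0: assume property (@(posedge clk) mode == 2'd0);
    `elsif CASE_MODE1
      assume_mode1: assume property (@(posedge clk) mode == 2'd1);
    `elsif CASE_MODE_REST
      // Remaining modes are verified together in a third run.
      assume_mode_rest: assume property (
        @(posedge clk) mode inside {2'd2, 2'd3}
      );
    `endif
    endmodule

The discipline that makes this safe is that the union of the per-run assumptions must equal the full legal space; reviewing or checking that union is exactly the kind of over-constraint analysis mentioned above.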

While we would all love to get better convergence for our designs, we have to stay vigilant and ensure that we don’t get there by over-constraining our analysis.

Further information

For a related blog, check out Goldilocks and the three constraints.

Author

Sean Safarpour is the Director of Application Engineering at Synopsys, where his team of specialists support the development and deployment of products such as VC Formal, Hector and Assertion IPs. He works closely with customers and R&D to solve their current verification challenges as well as to define and realize the next generation of formal applications. Prior to Synopsys, Safarpour was Director of R&D at Atrenta focused on new technology, and VP of Engineering and CTO at Vennsa Technologies, a start-up focused on automated root-cause analysis using formal techniques. Sean received his PhD from the University of Toronto where he completed his thesis entitled “Formal Methods in Automated Design Debugging”.

Company info

Synopsys Corporate Headquarters
690 East Middlefield Road
Mountain View, CA 94043
(650) 584-5000
(800) 541-7737
 www.synopsys.com

