Doc Formal: The evolution of formal verification – Part One

By Ashish Darbari  |  Posted: July 21, 2017

Ashish Darbari is CEO of formal verification consultancy and training provider Axiomise.

We celebrate innovation and creativity in the way we cherish fortresses, castles, and other monuments built throughout history. We have always been infatuated with architecture, with the design of the finished structures, even with the process itself, but not with how these buildings were tested. Many books describe how amazing landmarks were built and explain their beauty, but you are unlikely to find much about how they were examined for quality and rigor.

Obviously, tests were carried out: That we still can visit Mayan temples, Egyptian pyramids, and Roman amphitheaters testifies to the workmanship and thoughtful construction methods employed. But let’s be honest. Test has never been regarded with the same enthusiasm as design and architecture. It is little surprise that much the same applies in the semiconductor industry. Design takes the lion’s share of the glory; verification is largely left out of the limelight.

Doc Formal wants to encourage you to give test a closer look. It might not be flashy, but it is essential. It makes the cool stuff possible.

In this two-part feature, I will first give you a brief history of how formal techniques have evolved – and illustrate their solid and long-standing foundations – and then move on to give a user-level view of their application today.

Change, constant and continuous

Much has changed since transistors were invented in 1947, but evolution has not been uniform across our business.

In terms of design, architecture, and computing we have gone from room-sized mainframes in the 1960s to today’s pocket-sized mobile compute machines running at gigahertz with tons of on-chip memory. Our interactions with these machines continue to redefine human existence.

By contrast, test and verification have not scaled in the same manner. The principles remain the same; the basics of simulation-based testing have not changed. Test’s three main components – stimuli, observers/scoreboards, and checkers – still have their roots in test’s early forms.

The way in which a stimulus is generated may have changed, but the role of a stimulus generator is still one of the tentpoles for the constrained random or directed form of dynamic simulation used to test billion-gate designs. The role of the observer has changed: Where once a human read an oscilloscope, now a machine records the traces of simulation automatically or in a programmed manner. But whether an issue in a design-under-test (DUT) is found still frequently comes down to a comparison with a reference model, a process that again used to be done manually.
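To make those three components concrete, here is a minimal sketch in Python rather than in a hardware verification language; the 8-bit saturating adder, its reference model, and every function name are illustrative assumptions, not taken from any particular methodology.

import random

def dut_saturating_add(a, b):
    """Device under test: an 8-bit saturating adder (stand-in for the RTL)."""
    s = a + b
    return 255 if s > 255 else s

def reference_model(a, b):
    """Golden model that the checker scores the DUT against."""
    return min(a + b, 255)

def stimulus(num_tests):
    """Constrained-random stimulus: pairs of 8-bit operands."""
    for _ in range(num_tests):
        yield random.randint(0, 255), random.randint(0, 255)

def run_testbench(num_tests=1000):
    """Drive the DUT, observe its outputs, and check them against the model."""
    for a, b in stimulus(num_tests):
        got, expected = dut_saturating_add(a, b), reference_model(a, b)
        assert got == expected, f"mismatch for a={a}, b={b}: got {got}, expected {expected}"
    print(f"{num_tests} random tests passed")

if __name__ == "__main__":
    run_testbench()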

One thing that has changed significantly for semiconductors compared to other engineering disciplines, however, is the rate at which complexity has increased in design and architecture. We must also account for aggressively shrinking times-to-market. These factors have exacerbated the challenges facing test and verification, thrusting it into the spotlight for the first time and for one overarching reason: Verification now accounts for nearly 70% of overall design cost.

Autonomous cars and applications for the Internet of Things (IoT) are driving innovation and growth in hardware and software design. The complex array of requirements for these markets can be distilled down to two leading concerns: safety and security.

Functional safety is paramount in the automotive industry, avionics, and railways. At the Verification Futures Conference in April 2017, it was obvious that even the marine industry is alert to increasing demands for safety certification, and an array of standards – e.g., ISO 26262, DO-254, DO-178, and DO-333 – is raising the bar in terms of compliance. Meanwhile, security is fast becoming a nightmare for engineers across all disciplines involved in delivering trustworthy systems for mobile computing and access control.

Dynamic simulation is the default verification technique for hardware and software, and has long formed the backbone of test and verification. However, despite the consolidation of once-diverse methodologies (e.g., OVM, VMM, and AVM) into UVM, engineers remain some way from deploying it efficiently for speedy verification and, more importantly, from achieving the quality required to ensure that key safety and security goals are met in the underlying systems.

There is one exception: the recent advent of portable stimulus, where requirements-based testing is driven from architectural models of subsystems and SoCs. Even that, unfortunately, has ‘stimulus’ in the name rather than ‘requirements’. Nevertheless, the overall framework appears promising.

That aside, though, we still have some way to go.

The foundations of formal verification

As conventional simulation-based testing has increasingly struggled to cope with design complexity, strategies centered on formal verification methods have been quietly evolving in parallel.

Edsger Wybe Dijkstra famously coined the phrase, “Testing shows the presence, not the absence of bugs.” In April 1970, he challenged the design community to think differently. Even though the remark was made in the context of software verification, Dijkstra’s call to action has had much wider influence. Nor was Dijkstra alone on this train of thought. This quiet revolution in the USA and the UK can be traced back to the 1950s. In 1954, Martin Davis developed the first computer-generated mathematical proof of a theorem in a decidable fragment of first-order logic called Presburger arithmetic. The actual theorem was that the sum of two even numbers is even.
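For the curious, that statement can be expressed in Presburger arithmetic, which has addition but no multiplication, by encoding evenness existentially. The formulation below is a paraphrase for illustration, not Davis’s original input:

\forall x\,\forall y\,\bigl[(\exists a.\; x = a + a) \wedge (\exists b.\; y = b + b) \rightarrow (\exists c.\; x + y = c + c)\bigr]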

Key steps in theorem proving

Beginning in the late 1960s, first-order theorem provers were applied to the problem of verifying the correctness of computer programs, eventually covering languages such as Pascal, Ada and, much later, Java. Notable among such early verification systems was the Stanford Pascal Verifier. It was developed by David Luckham at Stanford University and was based on the Stanford Resolution Prover, itself built using J.A. Robinson’s resolution principle.

In Edinburgh, Scotland in 1972, Robert S. Boyer and J. Strother Moore built Nqthm, the first in the highly successful line of Boyer-Moore theorem provers, which could prove mathematical theorems automatically in a logic based on a dialect of Pure Lisp; it later became the basis for ACL2. Almost at the same time, Robin Milner built the original LCF system for proof checking at Stanford University. Descendants of LCF make up a thriving family of theorem provers, the most famous being HOL 4, built by Mike Gordon and Tom Melham; HOL Light, built by John Harrison; and Coq, built at INRIA in France drawing on original work by Gérard Huet and Thierry Coquand.

Both ACL2 and the various provers such as HOL 4 and HOL Light have been used extensively for digital design verification of floating point units, most notably at AMD and Intel. John Harrison, Josef Urban, and Freek Wiedijk have written a useful article covering the history of interactive theorem proving.

Another notable theorem prover that is not considered part of the LCF family is PVS. It was developed by Sam Owre, John Rushby, and Natarajan Shankar at SRI International and has been used extensively in the verification of safety-critical systems, especially on space-related work at NASA.

Theorem provers have long proved valuable and were once seen as the formal ‘tools of choice’. But they had a major shortcoming: if they could not prove a theorem, they could not say why. There was no way of generating a counter-example or any other explanation as to why a conjecture could not be proved as a lemma. This limited their applicability to formal experts with solid foundations in computer science and mathematics, so much so that many engineers used to quip, “You need a PhD in formal just to use formal tools.”

Model checking

While theorem proving was gaining attention for proving the properties of infinite systems, it clearly had shortcomings. These began to be addressed by a number of people during the 1970s. Particularly notable in this group were Allen Emerson and Edmund Clarke at Carnegie Mellon University in the USA, and J.P. Queille and Joseph Sifakis in France. Also, in the late 1970s and early 1980s, Amir Pnueli, Susan Owicki, and Leslie Lamport began work on using temporal logic to capture the properties of concurrent programs.

In 1981, Emerson and Clarke combined the state-space exploration approach with temporal logic in a way that provided the first automated model checking algorithm. It was fast, capable of proving properties of programs and, more importantly, able to provide counter-examples when it could not complete a proof. Moreover, it happily coped with partial specifications.
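To give a flavour of what such an algorithm does, here is a minimal explicit-state sketch in Python: it explores every reachable state of a toy transition system and either reports that a safety property holds or returns a counter-example trace. The two-process toy model and all names are illustrative assumptions; real model checkers handle full temporal logic, not just simple safety properties.

from collections import deque

def check_safety(initial, successors, safe):
    """Breadth-first reachability: return (True, None) if every reachable state
    satisfies `safe`, otherwise (False, trace) with a counter-example trace."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            trace = []
            while state is not None:          # walk back to the initial state
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:             # visit each state only once
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# Toy model: two processes, each cycling idle(0) -> trying(1) -> critical(2),
# with no arbitration, so mutual exclusion is expected to fail.
def successors(state):
    for i in (0, 1):
        s = list(state)
        s[i] = (s[i] + 1) % 3
        yield tuple(s)

def mutual_exclusion(state):
    return not (state[0] == 2 and state[1] == 2)

ok, trace = check_safety((0, 0), successors, mutual_exclusion)
print("property holds" if ok else f"counter-example trace: {trace}")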

During the mid-1980s, several papers showed how model checking could be applied to hardware verification, and the user base for formal began to grow. Soon, however, a new challenge emerged: the size of the hardware designs that could be verified with model checking was limited by explicit state-space exploration, which must enumerate every reachable state.

Around this time, Randal Bryant, from CMU’s electrical engineering department, began exploring circuit-level simulation, mainly switch-level logic simulation of transistor networks. Specifically, he considered the idea of three-valued simulation.

Bryant explored the use of symbolic encoding to compress simulation test vectors. The biggest challenge was an efficient encoding of these symbols and of the Boolean formulas built over them. In response, Bryant invented ordered binary decision diagrams (OBDDs). Ken McMillan, then a graduate student working with Edmund Clarke on his PhD, got to know of this work, and in 1993 symbolic model checking was born.

OBDDs provide a canonical form for Boolean formulas that is often substantially more compact than conjunctive or disjunctive normal form, and very efficient algorithms have been developed for manipulating them. To quote Ed Clarke: “Because the symbolic representation captures some of the regularity in the state space determined by circuits and protocols, it is possible to verify systems with an extremely large number of states—many orders of magnitude larger than could be handled by the explicit-state algorithms.”
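To illustrate why canonicity matters, the small Python sketch below hash-conses BDD nodes in a unique table and applies the standard reduction rule, so that two syntactically different but logically equivalent formulas, built under the same variable order, end up as exactly the same node. It is an illustration of the idea only, not an efficient BDD package.

unique = {}   # (var_index, low, high) -> node id   (the "unique table")
nodes = {}    # node id -> (var_index, low, high); ids 0 and 1 are terminals

def mk(var, low, high):
    """Create (or reuse) a node, applying the ROBDD reduction rules."""
    if low == high:                      # redundant test: skip this variable
        return low
    key = (var, low, high)
    if key not in unique:
        node_id = len(nodes) + 2         # 0 and 1 are reserved for terminals
        unique[key] = node_id
        nodes[node_id] = key
    return unique[key]

def build(f, num_vars, env=(), i=0):
    """Shannon-expand the Boolean function f over a fixed variable order."""
    if i == num_vars:
        return 1 if f(*env) else 0
    low = build(f, num_vars, env + (False,), i + 1)
    high = build(f, num_vars, env + (True,), i + 1)
    return mk(i, low, high)

# Two syntactically different but logically equivalent formulas (De Morgan):
f = lambda x, y: x and y
g = lambda x, y: not ((not x) or (not y))

print(build(f, 2), build(g, 2))   # same node id, hence the same function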

Together with Carl J. Seger, Bryant went on to invent another very powerful model checking technique called symbolic trajectory evaluation (STE). It has been used extensively for processor verification at companies such as Intel for more than 20 years. Several generations of Pentium floating point units have been verified using STE, and Intel continues to use it to this day.

The main benefit that STE provides over conventional model checking is that it employs a three-valued logic (0, 1, X), with a partial order over a lattice, to carry out symbolic simulation over finite time periods. Using ‘X’ to denote an unknown state in the design, and performing Boolean operations on ‘X’, provides an automatic, fast data abstraction. When coupled with BDDs, this gives a very fast algorithm for finite-state model checking. The main disadvantage of STE is that it cannot specify properties over unbounded time periods, which limits its applicability to verifying digital designs for finite, bounded-time behavior.
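A small Python sketch of that three-valued logic may help. The gate functions and the 2:1 multiplexer example are illustrative assumptions and show only the ternary evaluation itself, not full STE, which layers symbolic values and trajectory formulas on top.

X = "X"   # the unknown value

def t_not(a):
    return X if a == X else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:          # a controlling 0 decides the output
        return 0
    if a == 1 and b == 1:
        return 1
    return X                      # otherwise the result is unknown

def t_or(a, b):
    if a == 1 or b == 1:          # a controlling 1 decides the output
        return 1
    if a == 0 and b == 0:
        return 0
    return X

# Example circuit: a 2:1 multiplexer, out = (NOT sel AND d0) OR (sel AND d1).
def mux(sel, d0, d1):
    return t_or(t_and(t_not(sel), d0), t_and(sel, d1))

print(mux(1, X, 0))   # -> 0: with sel = 1 the output follows d1; d0 can stay unknown
print(mux(1, X, 1))   # -> 1
print(mux(X, 1, 1))   # -> X: plain ternary evaluation is pessimistic when sel is unknown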

Other prominent developments in formal technology still in use, which combine model checking with some form of theorem proving for the verification of concurrent systems, include:

 

However, all of these languages and their supporting formal verification tools have primarily been applied to software or to the high-level modeling of systems, and less so to hardware. Using them requires a substantial background in mathematics.

Equivalence checking

Both theorem proving and model checking continue to be used for different reasons on both hardware and software, but a third form of formal verification is also now being used extensively. That form is equivalence checking.

This method relies on comparing two models of a design and produces an outcome that either proves they are equal or provides a counter-example to show where they disagree. Early forms of this were applied to combinational hardware designs, but scalable equivalence checkers now exist for sequential equivalence checking. These are used widely, most notably for combinational equivalence checking of an RTL description against a netlist, but also for sequential netlist synthesis verification. It is now common practice for hardware designers to use equivalence checkers to check that an unoptimized digital design is functionally the same as an optimized one, where the optimization may have applied power-saving features such as clock gating.
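As a toy illustration of the idea, the Python sketch below forms a ‘miter’ of two combinational functions and searches for an input that distinguishes them. The 4-bit circuits are made-up examples, and real equivalence checkers use SAT or BDD engines rather than brute-force enumeration.

from itertools import product

def original(a, b):
    return a & b                   # reference implementation: 4-bit AND

def optimized(a, b):
    return ~(~a | ~b) & 0xF        # rewritten version using De Morgan's law

def check_equivalence(f, g, width=4):
    """Prove f == g over all inputs, or return a distinguishing counter-example."""
    for a, b in product(range(2 ** width), repeat=2):
        miter = f(a, b) ^ g(a, b)  # non-zero exactly when the outputs disagree
        if miter:
            return False, (a, b)
    return True, None

ok, cex = check_equivalence(original, optimized)
print("designs are equivalent" if ok else f"mismatch for inputs {cex}")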

This quick tour should give you a good sense of how formal verification techniques have steadily evolved, rising to each challenge before them. This is an established approach now coming into its own because of how it helps us confront increasing complexity. Formal verification also has, to take us back to the beginning, an elegance to it that is perhaps not as fully recognized as that in the designs it enables.

But in the next part, it is time to move on and look at how formal verification can be applied, and how it can make those lauded designs possible. We will do this by taking ‘A practitioner’s view of formal’, with specific examples built around the technology’s use in conjunction with the widely adopted SystemVerilog Assertions.

 
