Axiomise launched its formal verification training program at the same time as the recent Verification Futures 2018 conference in the UK, where I set out a new vision for formal. Afterwards, I was contacted by a number of seasoned simulation experts interested in formal verification. Quite a few asked me not only to explain how formal differs from simulation-based verification but also why one should make the investment in learning and deploying it.
I quickly realized that a number of the things I consider plainly obvious advantages of formal are not that obvious to the majority of verification engineers who use flows based around simulation, directed test, and constrained random. I also discovered that many of these engineers frame their willingness to invest in terms of the significant, recent investments they have made in emulation.
Interestingly enough, Tech Design Forum has published an article making ‘The budget case for emulation’. So, let me follow a similar tack. In this article, I will argue why formal verification should play an important part in your overall verification plan and, importantly, why your investment in formal verification tools, training and methodology development will not increase your costs, but shrink them.
First, we need to understand the different verification-related costs in a typical SoC design verification program.
In my view, cost is not just the expenditure incurred in adopting a new verification technology in its early days by buying tools and training. It should also be measured by how long the technology takes to use in practice, what kind of bugs it helps to catch, how it affects debug, and finally how it contributes to signing off verification with confidence.
Now, if you factor in the cost of missed bugs as well, the discussion becomes very interesting. With all that in mind, these are my ten main cost points, along with the reasons why you should adopt formal verification to control them successfully.
1. Tools and support
How much do the tools cost?
How easy is it to use them?
What is the support cost if one ever needs it?
What is the cost of maintaining the tools, their different patches and so on?
What is required to ensure consistency in their use across the organization?
While it is true that formal tools cost more than their simulation counterparts, they also cost a lot less than emulators. The initial investment in formal tools is quickly repaid when trained engineers deploying good methodologies start finding bugs earlier in the design cycle and are able to prove the absence of bugs conclusively. This is not possible with either simulation or emulation.
The cost of the multiple simulator licenses needed to achieve better test throughput can be offset against a single formal tool license capable of finding the same issues and more, thanks to the way formal technology works. Whereas constrained-random simulation requires a significant investment in running the same test harness with different seeds in parallel, formal can explore the state space exhaustively without requiring any stimulus or seeds. It performs breadth-first search systematically, often finding corner-case bugs much more quickly.
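To make the contrast concrete, here is a toy sketch in Python (not a real formal engine, and the FSM is entirely hypothetical): a seeded random walk may or may not reach a buggy corner-case transition within its cycle budget, while a breadth-first search over the finite state space finds it, or proves its absence, with no stimulus at all.

```python
import random
from collections import deque

# Hypothetical toy design: a 3-bit counter that wraps early (6 -> 0,
# skipping 7), but only after enable has been held high for three cycles.
def step(state, en):
    count, streak = state
    streak = min(streak + 1, 3) if en else 0   # saturate to keep states finite
    if count == 6 and streak == 3:
        return (0, streak)                     # the buggy corner-case wrap
    return ((count + 1) % 8, streak)

def violates(prev, nxt):
    # Property under check: the counter must never jump from 6 straight to 0.
    return prev[0] == 6 and nxt[0] == 0

# Constrained-random style: one seed, a bounded cycle budget.
def simulate(seed, cycles=20):
    rng = random.Random(seed)
    state = (0, 0)
    for _ in range(cycles):
        nxt = step(state, rng.choice([0, 1]))
        if violates(state, nxt):
            return True                        # bug hit by luck
        state = nxt
    return False                               # inconclusive; not a proof

# Formal-style reachability: breadth-first search visits every reachable
# state exactly once, so it either finds the bug or proves it absent.
def bfs_check():
    seen, frontier = {(0, 0)}, deque([(0, 0)])
    while frontier:
        state = frontier.popleft()
        for en in (0, 1):
            nxt = step(state, en)
            if violates(state, nxt):
                return True
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(bfs_check())  # True: the corner case is always found, no seeds needed
```

Real formal tools work symbolically on RTL rather than enumerating tuples like this, but the principle is the same: exhaustive exploration replaces seed lotteries.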
2. Training

How much does training cost per employee?
How long does it take?
When will it start to pay off?
Most good training programs ensure that the student is ready to work on an appropriate project, and scope out what can be done with the trained resource over staggered periods of six weeks, one quarter, and six months.
If you are not seeing any return on your investment after six months, the training probably has not been deployed properly and you must look at the reasons why. If you have derived enough quality results from formal in six months, you should have already recovered your initial investment, so from this point onwards the ROI will simply continue to grow.
But let’s be honest: a good training methodology has been lacking, and this is one reason why formal’s adoption has been a mixed bag. However, if you can get good training (such as what we offer at Axiomise) that is totally methodology-oriented, modular, customizable and tool-agnostic, you will get results. As a benchmark, a high-quality training program for formal should typically cost less than a UVM training course.
3. Methodology

Has it been developed thoughtfully?
How is it promoted and adopted across the organization?
Can every vendor’s tools be applied within it?
An investment of effort, time and money needs to be made to ensure that any methodology is adopted properly and consistently within an organization. The methodology should consider the overall scope of verification and should look to solve the real problems in verification, not just focus on exploiting automated features within your tools.
Indeed, your methodology should be completely vendor-agnostic and your training providers should provide instruction that works seamlessly across all tools.
4. Tool deployment
What does each license cost?
What is the tool’s runtime?
What is the cost of the required compute cluster?
From my experience, a well-trained engineering team using its tools properly not only finds more bugs but also makes effective use of compute-cluster runtime. This in turn means it requires fewer licenses.
This is certainly valid in the case of formal verification, where a good methodology requires only a small set of tool licenses and provides a quicker turnaround on proof results, yielding optimal use of the compute cluster.
I recently described in one of my blogs how I regularly undertake formal verification tasks and obtain a high rate of proof convergence on a tablet PC with a mere 8GB of RAM, a single CPU core and just one license. The example in that article was a serial buffer verification where exhaustive verification of buffers with 10^40,000 states can find bugs in less than ten minutes.
5. Debug

What is the most efficient debug strategy?
Formal verification is renowned for producing debug traces that are much shorter than those resulting from a simulation or emulation test. If one were to measure what it costs to debug an SoC design with formal, simulation and emulation in terms of effort, time and the cost of the engineering resource (the job grade of an engineer, for example), you would likely get interesting results showing that formal outperforms both simulation and emulation in debug.
I recall one project where I was asked to investigate a bug in an SoC controller in which one of the performance counters was not behaving properly. This error was flagged by a highly competent verification engineer who had spent about three weeks debugging the emulation trace to find the dodgy counter. Using some powerful techniques, I reproduced the bug in this 64-bit counter in formal with one minute of runtime. What was even better was that the runtime remained fixed as the counter size increased: even for a 512-bit counter, the time it took to find any kind of counter bug stayed at a minute. A 64-bit counter has nearly 18.4 quintillion states, and if it takes a minute to find a bug, I’ll take that.
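The arithmetic behind those state counts is easy to check, and it shows why enumeration can never compete with a symbolic proof (the one-state-per-nanosecond rate below is my own illustrative assumption, not a measured simulator figure):

```python
# State counts for the counters discussed above.
states_64 = 2 ** 64
states_512 = 2 ** 512

print(states_64)             # 18446744073709551616, ~18.4 quintillion
print(len(str(states_512)))  # 155: a 155-digit state count

# Even at an assumed rate of one state per nanosecond, enumerating every
# state of the 64-bit counter would take centuries; a symbolic formal
# proof does not enumerate states, which is why its runtime stays flat.
years = states_64 / 1e9 / (3600 * 24 * 365)
print(round(years))          # ~585 years
```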
6. Bug fix and retest
How easy is it to fix a bug?
How easy is it to test that a fix works and that it has not broken anything else in the design?
Fixing one bug and thereby causing another is a long-standing problem. Formal verification is ideal for this kind of work: its exhaustive nature allows the verification team to establish conclusively that a bug fix is safe.
7. Signoff

What time, effort and engineering resources are required to be sure that signoff is complete?
Understanding what it costs to sign off verification is a topic in itself, but we can get a high-level idea by looking at the factors above and how they contribute to a coverage-based signoff infrastructure.
Signoff with formal has a cost too; it is not entirely free in the way stimulus generation is (stimulus comes for free with formal). However, if the right coverage techniques are deployed, formal verification signoff becomes a lot easier. Certain tools address signoff in a way that saves engineers an enormous amount of time, and as I write this article new solutions are emerging in this space that will surely make signoff with formal easier still.
The important thing to note is that when one is able to prove a property exhaustively with formal, establish at the same time that the free formal stimulus was not blocked by user-defined constraints, and know that subsequent mutations to the design would be caught by one or more checkers, this is the best possible outcome one can imagine for any verification technique.
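The mutation point deserves a small illustration. This Python sketch (a hypothetical 4-bit increment function, not any tool’s real flow) shows the idea: a property that holds on the reference design must fail when a mutation is injected, which is evidence that the checker genuinely constrains the design.

```python
def ref_incr(x):
    return (x + 1) % 16      # reference 4-bit increment

def mutant_incr(x):
    return (x + 2) % 16      # injected mutation: steps by two

def full_cycle(incr):
    # Property: from 0, sixteen steps must visit all 16 values and return to 0.
    seen, x = set(), 0
    for _ in range(16):
        x = incr(x)
        seen.add(x)
    return x == 0 and len(seen) == 16

print(full_cycle(ref_incr))     # True: property proven on the reference
print(full_cycle(mutant_incr))  # False: the mutation is caught
```

A checker that passed on both versions would be telling you it has no teeth; that is exactly what mutation-based signoff is designed to expose.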
8. Late and missed bugs
What would it cost if a bug were found two weeks before tapeout?
What would it cost to patch a broken chip if a bug were to escape into the final product?
Or putting these risks in another context, ‘What if we could have found the same bug in the first hour of design bring-up?’
We still do not really know how much it has cost to fix Meltdown and Spectre. Using formal verification early in the design cycle, and maintaining systematic deployment across the whole IP development cycle, can minimize the occurrence of late bugs and even eliminate missed bugs altogether.
9. Reuse

What options are there to reuse the verification infrastructure?
You want to minimize the costs related to your engineering resources and shrink your time-to-market. Architecting end-to-end formal verification testbenches does require an investment of time and, for big designs, a dedicated deployment of engineering resources. But the good news is that if this effort is properly planned based on good training and a good methodology, it is very likely that the testbench can be reused.
Where formal verification wins over simulation is that formal does not require any investment in stimulus generation; you get that for free. And it can be deployed across the I/O boundary of the IP as well as on internal finite state machines to detect deadlocks and livelocks, and to prove exhaustively that designs are free of them.
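For a flavour of what deadlock detection over a state machine looks like, here is a minimal Python sketch (the handshake FSM is hypothetical): exhaustive reachability flags any reachable state with no way out, which is precisely the kind of conclusive answer a directed test cannot give.

```python
from collections import deque

# Hypothetical handshake FSM expressed as a transition relation.
TRANSITIONS = {
    "IDLE":  ["REQ"],
    "REQ":   ["GRANT", "RETRY"],
    "GRANT": ["IDLE"],
    "RETRY": [],               # bug: once entered, no transition leaves RETRY
}

def find_deadlocks(start="IDLE"):
    # Breadth-first reachability: any reachable state with no successor
    # is a deadlock, and the exhaustive search finds every such state.
    seen, frontier, deadlocks = {start}, deque([start]), []
    while frontier:
        state = frontier.popleft()
        successors = TRANSITIONS[state]
        if not successors:
            deadlocks.append(state)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(find_deadlocks())  # ['RETRY']
```

Livelock detection works on the same reachable set, except the query becomes whether a cycle exists from which no progress state is reachable.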
10. Engineering resources
How do you maximize productivity across your team?
If a junior engineer finds most of the bugs in a project, surely that is more cost-effective than having a principal engineer do so.
No offense to the principal engineers out there – I was one myself. But I remember from my experiences at several major companies that the best resources for formal verification are often the youngest hires – provided they have received adequate training.
I can think of several star performers who went from initial training to production-grade work in weeks and have continued to use formal verification ever since. The reason formal verification is easier to learn than constrained random is that all it takes to become productive is some background in digital design, familiarity with some high-level design languages, and a problem-solving mindset focussed on requirements as the basis for verification. You do not need to be good at understanding polymorphism, object-oriented programming or software engineering.
So, time to invest in formal?
I urge you to consider your verification choices carefully. History is an important asset, so analyze your previous project data. It should reveal what worked for you without formal and what did not. If you used formal and it did not work, do you know why? Was part of the problem down to a lack of good training and/or methodology?
Do you need formal verification? You know my answer, what’s yours?
Perhaps you agree with some of my reasons to use formal, perhaps you don’t. In any case, get in touch and share your thoughts.
You can follow Doc Formal on Twitter at @ashishdarbari