Design companies have continued to buy functional verification tools through the recent downturn and the prediction is that verification spending will continue to rise. While this is good news for EDA companies, it is also an indicator of the industry’s challenge in containing the verification problem as design complexity continues to rise in terms of the number of transistors and the system-level functionality on a chip.
The common experience is that newer chips have additional failure modes that were not issues before. For example, approximately 85% of the designs today contain more than one clock domain. This is necessitated by a combination of clock-skew considerations as well as the diverse clocking requirements of system-level components on a chip. As a result, chip failures arising from improperly designed clock-domain crossings have become increasingly common.
The repercussions of clock-crossing errors are going to be very visible in large SOCs with numerous asynchronous interfaces and expensive tapeouts. These new failure modes also impact productivity and ROI on much smaller designs implemented with FPGAs. In other words, the impact is universal across chip sizes, styles and application domains. One of our first customers was, in fact, an FPGA design house. They began using specialized clock-domain verification after discovering numerous interface-related failures in the field, failures that proved very difficult to reproduce even in a lab environment. They reported that some of the failure symptoms manifested only in specific batches of FPGAs received from the fab.
Similarly, low-power design techniques such as clock gating and Vdd gating are being used much more widely now, creating failure modes that did not exist in previous chip generations. As power states change dynamically at the block level, ensuring that the functionality of these and neighboring blocks is not adversely affected has become a significant verification obligation best addressed with domain-specific tools. The addition of DFT structures, the emerging need to plan, integrate and bless timing constraints at the RTL level, and the presence of X-sources during simulation have also created new verification obligations and driven the need for domain-specific tools.
While some of these obligations are clearly sign-off level (for example, clock-crossing verification), even those that are not create enough of a productivity overhead that focused tools addressing them add substantial value to the verification process.
This is leading us to a new paradigm of verification by parts rather than the incumbent, almost monolithic, process of simulation and static timing analysis. While simulation has served the industry reasonably well thus far, its viability as a mainstay of the verification flow is being marginalized by the sheer complexity of checking for the newer failure modes. For example, using simulation or static timing analysis alone to check clock-domain crossings does not make sense, given that these failures arise as a result of corner case combinations of timing and functionality. Similarly, much of the correctness specification is implicit in the power, DFT, timing constraints and X-verification domains. Extracting that specification and applying static techniques in these domains in a focused manner leads to a much more efficient verification and debug process.
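As a toy illustration of why these failures are corner-case by nature (all names and numbers below are hypothetical, not from any real design or tool), the short Python sketch models a one-cycle pulse generated in a fast clock domain and sampled by an asynchronous slower clock. Whether the pulse is ever observed depends entirely on the relative phase of the two clocks, which is precisely the kind of condition a handful of simulation runs can easily miss.

```python
# Toy CDC model (illustrative only): a one-cycle pulse from a fast clock
# domain can be missed entirely by a slower, asynchronous sampling clock,
# depending on the relative phase of the two clocks.
def pulse_seen_by_slow_domain(fast_period, slow_period, slow_phase, pulse_start):
    """Return True if any sampling edge of the slow clock lands inside the pulse."""
    pulse_end = pulse_start + fast_period  # pulse lasts one fast-clock cycle
    t = slow_phase                         # first sampling edge of the slow clock
    while t < pulse_end + slow_period:
        if pulse_start <= t < pulse_end:
            return True
        t += slow_period
    return False

# Sweep relative clock phases: some alignments see the pulse, many do not.
misses = sum(
    not pulse_seen_by_slow_domain(fast_period=2.0, slow_period=5.0,
                                  slow_phase=phase, pulse_start=10.0)
    for phase in [i * 0.1 for i in range(50)]
)
print(f"{misses}/50 phase alignments miss the pulse")  # → 30/50
```

A constrained-random testbench exercises only a few of these alignments per run, whereas a static clock-domain analysis flags the unsynchronized crossing regardless of phase.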
Simulation is still important in the flow, but is best used in a manner that complements these domain-specific tools rather than as a one-stop verification hammer. Links can be built from the domain-specific tools into simulation in a manner that uses simulation cycles more effectively. At a minimum, the tools must use industry-standard interfaces and require minimal set-up and scripting to be adopted into verification flows.
Domain-specific verification tools built around customized static techniques, based on a synergistic integration of structural and formal analysis, are the most effective. Structural analysis finds errors early and carves out well-scoped formal analysis problems. A familiar success story of applying specialized techniques to solve a narrow but key problem has been the wide and easy adoption of formal equivalence checking between RTL and gate-level representations.
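To illustrate the structural half of that combination (a deliberately simplified sketch over a hypothetical netlist representation, not any vendor's actual analysis), the snippet below flags register-to-register connections whose endpoints are clocked by different clocks and that do not land on a cell tagged as a synchronizer. Each flagged crossing then becomes a small, well-scoped candidate for formal analysis.

```python
# Hypothetical netlist: each register is mapped to its clock, and edges
# carry data between registers, optionally through synchronizer cells.
registers = {
    "tx_reg":  "clk_a",
    "rx_reg":  "clk_b",
    "sync1":   "clk_b",   # first stage of a two-flop synchronizer
    "sync2":   "clk_b",   # second stage
    "safe_rx": "clk_b",
}
edges = [
    ("tx_reg", "rx_reg"),   # raw crossing: clk_a -> clk_b, no synchronizer
    ("tx_reg", "sync1"),    # synchronized crossing: clk_a -> clk_b
    ("sync1", "sync2"),
    ("sync2", "safe_rx"),
]
synchronizers = {"sync1", "sync2"}

def unsynchronized_crossings(registers, edges, synchronizers):
    """Structurally flag clock-domain crossings not landing on a synchronizer."""
    return [
        (src, dst)
        for src, dst in edges
        if registers[src] != registers[dst] and dst not in synchronizers
    ]

print(unsynchronized_crossings(registers, edges, synchronizers))
# → [('tx_reg', 'rx_reg')]
```

The structural pass is cheap and exhaustive over the netlist; the remaining, harder question of whether a given synchronization scheme actually preserves protocol correctness is exactly the well-scoped problem that formal analysis is then applied to.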
Another requirement is the effective orchestration of multiple verification strategies. As the design manager and verification team work on the design it is their expert skill and judgment that enables the project to progress in the face of rapidly changing conditions. The availability of these razor-sharp technologies targeting specific failure modes allows verification to be approached in a surgical manner with consequent improvements in design quality, productivity and return on investment. Even the best surgeon needs the right tools to be effective!
Dr Pranav Ashar is chief technology officer of Real Intent and brings two decades of EDA expertise to the company. He previously worked at NEC Labs (Princeton, NJ) developing formal verification technologies for VLSI design. He has authored about 70 papers and co-authored the book ‘Sequential Logic Synthesis’. He has 35 patents granted or pending, many of which have been licensed or used for business enablement. He holds a PhD in EECS from UC Berkeley.