Focusing coverage for system-level integration

By Chris Edwards | Posted: July 8, 2014

Getting effective coverage for a large, complex SoC is one of the biggest issues in verification right now, leading some teams to consider early tapeout to get a platform on which they can run effective system-level vectors and even applications. Many of the errors caught using taped-out silicon tend to be complex integration bugs where the interconnect suddenly fails to behave as it should. An archived webinar by Cadence Design Systems shows an alternative approach – using hardware acceleration to extend RTL verification techniques to the system level.

“Engineers and managers are gradually losing visibility into verification,” said Raj Mathur, director of product marketing at Cadence, during the webinar. “When packets are being dropped, the conditions that cause them to be dropped may be missed for thousands or millions of cycles and the test bench may not recognize the problem until weeks have gone by.”

The approach described by Mathur and Eric Melancon, staff product engineer, focuses on combining transaction-based modeling with hardware acceleration, allowing the use of live hardware interfaces to help create real-world conditions for verification. The key, explained Mathur, is to focus the SoC-level effort on conditions that affect integration rather than the internal functions of individual blocks, although assertions continue to be used on that block-level logic to ensure continued correctness as errors are rectified.
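
As a rough sketch of the kind of block-level check that can stay enabled during SoC-level runs, a pair of SystemVerilog assertions on a hypothetical request/acknowledge handshake might look like this (the module and signal names are illustrative, not taken from the webinar):

```systemverilog
// Hypothetical interconnect handshake checks: a request must hold until it
// is accepted, and an acknowledge must arrive within 16 cycles.
// All names here are illustrative only.
module req_ack_check (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Request held stable until acknowledged
  assert property (@(posedge clk) disable iff (!rst_n)
    req && !ack |=> req);

  // Acknowledge arrives within a bounded window after the request rises
  assert property (@(posedge clk) disable iff (!rst_n)
    $rose(req) |-> ##[1:16] ack);
endmodule
```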

“At the system or SoC level, you can begin to expand coverage by adding live interfaces while you continue to use assertions and coverage techniques. As users apply coverage, they typically ask themselves certain questions as they attempt to achieve 100 per cent coverage,” Mathur explained.

“From a verification perspective, the block level is not the same as the subsystem or system level,” Melancon added. “You are trying to answer different questions. At the system level you are looking at interactions between subsystems, such as ‘were two units active simultaneously?’ or ‘did I receive an interrupt when the CPU transferred data to the GPU?’”
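
Questions of that kind map naturally onto SystemVerilog cover properties. A minimal sketch, assuming hypothetical signals for the cross-subsystem events of interest:

```systemverilog
// Sketch of SoC-level interaction coverage. The signals (dma_active,
// gpu_active, cpu_to_gpu_xfer_done, gpu_irq) are hypothetical placeholders
// for the cross-subsystem events being tracked.
module soc_interaction_cov (
  input logic clk,
  input logic rst_n,
  input logic dma_active,
  input logic gpu_active,
  input logic cpu_to_gpu_xfer_done,
  input logic gpu_irq
);
  // Were two units active simultaneously?
  cover property (@(posedge clk) disable iff (!rst_n)
    dma_active && gpu_active);

  // Did an interrupt follow a CPU-to-GPU data transfer?
  cover property (@(posedge clk) disable iff (!rst_n)
    cpu_to_gpu_xfer_done ##[1:$] gpu_irq);
endmodule
```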

System-level focus

The key is to focus coverage on the areas that matter, said Melancon, using techniques such as hierarchy management to isolate signals that have an effect at the chip level and take the focus away from those that will have been verified at the block level.
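
Hierarchy selection itself is normally handled in the coverage tool, but the same focusing idea can be sketched in functional coverage by zero-weighting groups that only matter at block level so they do not dilute the chip-level score. The covergroups and sampled signals below are illustrative only:

```systemverilog
// Sketch: keep block-level detail out of the SoC-level coverage score by
// zero-weighting it, while integration-level coverage keeps full weight.
// Signal and covergroup names are illustrative only.
module soc_focus_cov (
  input logic       clk,
  input logic [3:0] fifo_fill_level,
  input logic [2:0] bus_master_id,
  input logic [2:0] bus_slave_id
);
  covergroup cg_block_detail @(posedge clk);
    option.weight = 0;                     // already closed at block level
    cp_fill: coverpoint fifo_fill_level;
  endgroup

  covergroup cg_soc_integration @(posedge clk);
    option.weight = 1;                     // counts toward the SoC-level goal
    cp_master: coverpoint bus_master_id;
    cp_slave:  coverpoint bus_slave_id;
    x_routes:  cross cp_master, cp_slave;  // which master/slave pairs talked
  endgroup

  cg_block_detail    cov_block = new();
  cg_soc_integration cov_soc   = new();
endmodule
```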

“Another technique is to use deeper analysis in particular regions of the design. You may have a CPU, some unique functionality in the core and a bunch of peripherals,” Melancon said. “Let’s say the CPU is licensed and the peripherals are being reused. Some of the core functions may be new – they are the meat of this SoC design. And let’s say there is a new, complex interconnect being used. It can be useful to focus on these less well-tested parts of the design.

“Many users do still take on the task of analyzing coverage data deep into the design and set very high coverage goals – close to 100 per cent. Any parts that don’t meet that, they will have engineers review the holes in those modules. When you do that deep analysis, invariably you will find coverage items that will not be used in the context of the design.”

To avoid spending too much time on trying to build coverage for those deeply buried sections, coverage tools can be applied that support the notion of exclusions, said Melancon. “Say a module supports 8, 16, and 32bit accesses. If this usage of the module supports 32bit only, the 8 and 16bit parts can remain uncovered. Taking account of that increased block coverage from 83 to 89 per cent in one example. Tools can store this so you only have to do the review once.”
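
Tool-managed exclusions apply to code coverage, but the same intent can be expressed in functional coverage with ignore_bins. A minimal sketch, assuming a hypothetical access-size coverpoint:

```systemverilog
// Sketch: the block supports 8-, 16- and 32-bit accesses, but this SoC only
// issues 32-bit accesses, so the unused sizes are excluded from the coverage
// goal. The enum, signal and module names are illustrative only.
typedef enum logic [1:0] {SIZE_8, SIZE_16, SIZE_32} access_size_e;

module access_size_cov (
  input logic         clk,
  input access_size_e size
);
  covergroup cg_access @(posedge clk);
    cp_size: coverpoint size {
      bins used          = {SIZE_32};
      ignore_bins unused = {SIZE_8, SIZE_16};  // unreachable in this design
    }
  endgroup

  cg_access cov = new();
endmodule
```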

Coverage for optimization

“Once you have items captured,” Melancon added, referring to the verification plan, “then you can define and map coverage items into that plan. The plan gives you the ability to organize coverage by feature, rather than having to analyze flat coverage data, so you can easily measure progress and track coverage.”

“We are also finding opportunities to use coverage for optimization,” said Melancon, pointing to the use of software during hardware-accelerated verification to track down potential bottlenecks. “Looking at coverage on a FIFO, if you see that FIFO usage is low, maybe you can reduce its size. Or if it’s unexpectedly high, maybe expand the size of the FIFO or perform software changes to make better use of the FIFO. Covergroups and properties provide easy ways to collect that data so you don’t have to put your own counters into the design. We have seen some users who use code coverage to give themselves feedback on levels of activity into different parts of the design.”
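
A covergroup over FIFO occupancy is one way to collect that data without hand-written counters. A minimal sketch, with hypothetical parameter and signal names:

```systemverilog
// Sketch: bin the FIFO fill level so the coverage report shows how much of
// the FIFO is ever used. DEPTH, clk and fill_level are illustrative only.
module fifo_occupancy_cov #(parameter int DEPTH = 64) (
  input logic                           clk,
  input logic [$clog2(DEPTH+1)-1:0]     fill_level
);
  covergroup cg_fifo_fill @(posedge clk);
    cp_fill: coverpoint fill_level {
      bins empty = {0};
      bins low   = {[1 : DEPTH/4]};
      bins mid   = {[DEPTH/4+1 : DEPTH/2]};
      bins high  = {[DEPTH/2+1 : DEPTH-1]};
      bins full  = {DEPTH};
    }
  endgroup

  cg_fifo_fill cov = new();
endmodule
```

If the high and full bins never hit, the FIFO is a candidate for shrinking; if they dominate, that is a signal to enlarge it or rework the software that feeds it.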

The Cadence webinar archive is available here.
