Software-focused methods and languages are needed to address the growing gap between SoC and IP verification, according to panelists in a session on smarter verification organized by Cadence Design Systems at DAC 2015 in San Francisco.
The use-case diagrams from the Unified Modeling Language (UML), originally developed for specifying software behaviour, may enable hardware and software engineers to communicate more effectively, panelists said. And adoption of the V-model development flow may streamline the generation of test vectors that work better at the SoC level than those produced by today’s IP-focused, UVM-based techniques.
Ziv Binyamini, CTO of system and software solutions at Cadence Design Systems, said: “SoC verification is completely different to IP. We can’t just take the methods used at the IP level and stretch them.”
SoC verification gap
AMD fellow and methodology architect Alex Starr used the metaphor of a bookshelf, with the bookends of UVM-based IP verification at one end and production software for post-silicon work at the other, sandwiching a verification gap in the middle that needs to be addressed. “How do we target the stuff in the middle?” he asked.
Starr said: “We have to make outputs familiar to the various stakeholders and have inputs from the various stakeholders. The IP guys know how that should be programmed. The driver guy knows how some of the software works. But no-one knows how the whole system works. How do we write verification that provides info from one level to another? There are a lot of challenges in how we capture information about what a system needs to do.”
Starr pointed to situations he had encountered where a language barrier between software and hardware teams led to important test cases, such as the behavior of direct memory accesses (DMA) in a multicore system, almost going unchecked.
“We were getting questions: has this feature been verified? These were software features. So the verification people said ‘we don’t know what you are talking about’. There was a gap there. We identified this gap was on language and understanding. Once we had discussed and worked out what each feature meant we could pick the most appropriate engine for verification,” Starr said.
Binyamini said: “We’ve heard about these teams with no common language. What we find is that we can provide them a way to talk in use-cases from UML. That is an industry-standard way to capture use-cases. Even executives can understand it because it’s very intuitive. And you can communicate with any engineer with a reasonable background. It is a way for all of these stakeholders with their own languages to communicate and understand.”
“We need to take more from the software perspective,” said Sanjay Gupta, director of engineering at Qualcomm, adding that it would be useful for hardware verification engineers to be versed in languages such as C for use in tests.
Starr said that, for the moment, AMD is using hybrid schemes that mix virtual prototypes and emulation with simulation to try to achieve the performance needed to test the SoC as it moves towards tapeout and software completion.
The virtual platforms give software teams early access to the hardware for their development, and may provide a basis for more verification to be performed in software, exercising chip features at a higher level than HDL test vectors.
Gupta added: “Today they are used almost exclusively for software development and nothing else. Even the cycle-approximate models are fairly good enough to do some sort of performance verification. But they are heavily underutilized.”
Binyamini said that, as well as using faster mechanisms for simulating designs, through to emulation, efficiency gains could be made at the SoC level by moving to software-driven tests. But rather than simply booting an OS and running applications, the software tests should be generated from specifications in a methodology that follows the V-model approach used in mission-critical software development.
“In moving to software-driven verification it is no longer just about functional verification on its own but about functional, power, and performance. And the only way to test performance and power is under specific use-cases,” Binyamini said.
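Generating tests from use-case specifications rather than hand-written stimulus can be sketched in a few lines. The Python sketch below is purely illustrative, assuming a toy DMA model with invented step names and addresses (a real flow would emit C running on the SoC’s own cores): a use-case is captured as data and then expanded into a concrete software-style test.

```python
# Minimal sketch of spec-driven, software-style SoC testing.
# All names (ToyDma, step names, addresses) are hypothetical.

class ToyDma:
    """Trivial stand-in for a DMA engine: copies a block within 'memory'."""
    def __init__(self, memory):
        self.memory = memory
        self.src = self.dst = self.length = 0

    def program(self, src, dst, length):
        self.src, self.dst, self.length = src, dst, length

    def start(self):
        block = self.memory[self.src:self.src + self.length]
        self.memory[self.dst:self.dst + self.length] = block

# A 'use-case' from the spec, written as data rather than hand-coded stimulus.
USE_CASE = [
    ("program", {"src": 0, "dst": 8, "length": 4}),
    ("start", {}),
    ("expect_copied", {"src": 0, "dst": 8, "length": 4}),
]

def run_use_case(steps):
    memory = list(range(16))            # deterministic initial contents
    dma = ToyDma(memory)
    for op, args in steps:
        if op == "program":
            dma.program(**args)
        elif op == "start":
            dma.start()
        elif op == "expect_copied":
            s, d, n = args["src"], args["dst"], args["length"]
            assert memory[d:d + n] == memory[s:s + n], "DMA copy mismatch"
    return memory

result = run_use_case(USE_CASE)
print(result[8:12])  # the copied block
```

Because the use-case is data, the same description can in principle drive simulation, emulation, or a silicon bring-up harness, which is the portability the panelists were arguing for.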
Starr agreed: “Performance is a key part of functional verification.”
Although hardware could adopt more techniques derived from the software world, Binyamini said longstanding hardware approaches could be extended equally well into the software and systems domain: “One thing that’s similar to an IP verification approach in the SoC domain is the need for constrained-random, coverage-driven verification.
“We should be bringing constrained random verification into software and software into hardware verification as we move beyond UVM.”
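As a rough illustration of what constrained-random, coverage-driven stimulus looks like when carried into software, the sketch below draws DMA-style transactions subject to simple constraints and records functional-coverage bins. The constraints, field names, and bins are invented for illustration; production flows would use SystemVerilog constraints or portable-stimulus tools rather than plain Python.

```python
import random

# Hedged sketch of constrained-random, coverage-driven stimulus in software.
# Field names, constraints, and coverage bins are illustrative only.
random.seed(0)  # reproducible runs

BURST_LENGTHS = (1, 4, 8, 16)

def random_transaction():
    """Draw a DMA-style transaction subject to simple constraints."""
    burst = random.choice(BURST_LENGTHS)
    addr = random.randrange(0, 256, 4)          # constraint: word-aligned
    # constraint: long bursts must not cross a 64-byte boundary
    while burst >= 8 and (addr % 64) + 4 * burst > 64:
        addr = random.randrange(0, 256, 4)      # re-randomize until legal
    return {"addr": addr, "burst": burst}

def run(n_tests=200):
    coverage = set()                            # functional-coverage bins
    for _ in range(n_tests):
        txn = random_transaction()
        assert txn["addr"] % 4 == 0             # checks mirror the constraints
        coverage.add(txn["burst"])              # bin: burst length exercised
    return coverage

bins = run()
print(sorted(bins))
```

The point of the sketch is the loop structure: random stimulus under constraints, checks that mirror those constraints, and coverage bins that tell you when to stop — the same shape whether the stimulus drives RTL or runs as software on the SoC.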
Similarly, coverage-driven verification needs to move up to include software-driven schemes, said Adnan Hamid, founder and CEO of Breker Verification Systems. “Coverage is everything and it needs to cover the software. When we look at product issues and go through deep analysis maybe some of them are in firmware.
“If you measure coverage on the software you are using you may find you are not covering much of your design that will be needed for the critical set of functions in the feature. We need to find out what the minimum set of tests for those feature sets is. We are in the first stage of this technology emerging.”
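Hamid’s point about finding a minimum set of tests for a given set of features maps naturally onto set cover. The sketch below uses the standard greedy approximation; the test names and the test-to-feature mapping are invented for illustration, not taken from any real regression suite.

```python
# Greedy set-cover sketch for picking a small test set that still hits
# all required features. Test names and feature mappings are hypothetical.

TEST_FEATURES = {
    "boot_smoke":    {"reset", "clocking"},
    "dma_basic":     {"dma", "interrupts"},
    "dma_multicore": {"dma", "cache_coherence", "interrupts"},
    "power_gating":  {"power", "clocking"},
}

def minimize(tests, required):
    """Greedy approximation: repeatedly pick the test that covers the
    most still-uncovered features."""
    chosen, uncovered = [], set(required)
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        gained = tests[best] & uncovered
        if not gained:
            raise ValueError(f"no test covers: {uncovered}")
        chosen.append(best)
        uncovered -= gained
    return chosen

required = {"reset", "clocking", "dma", "cache_coherence",
            "interrupts", "power"}
print(minimize(TEST_FEATURES, required))
```

Exact minimum set cover is NP-hard, so the greedy heuristic is the usual practical choice; here it drops `dma_basic` because `dma_multicore` subsumes its features.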