Verification engineers look to better skills to beat schedules
An excessive focus on point tools, coupled with interoperability challenges and a lack of cross-industry cooperation, is hindering the ability of SoC teams to design and verify complex products. That was the view of verification experts on a panel at DVCon this week (March 2nd), who are concerned about the buildup of technical debt in verification flows and environments.
The panel, moderated by Mythic senior design-verification manager Eric Decker, ranged over topics from the role of open-source software in verification to AI, but a common theme that emerged was a growing need for SoC design and implementation teams to think differently about how to make bugs easier to track down, if not avoid them altogether.
Jason Sprott, CTO and co-founder of Verilab, said two key trends have come together that may force a change. “The biggest trend is the heterogeneous compute environment we have now.”
The combination of general-purpose and speciality processors that need to be integrated in an SoC and work together, often in tightly coupled configurations, has increased the complexity of verification. “With the speed of things now, where time to market is the big thing, there is not really much time to optimize the architecture, to simplify or make the SoC easier to design or verify. You’ve just got time to bolt it together. That adds a massive amount of complexity at the design-verification phase that’s not thought about at the front-end. Instead, it gets paid forward,” Sprott explained.
“What makes verification more challenging is most of this IP we didn’t design inhouse,” said Brian Murdock, SoC verification engineer at Cruise.
Sprott noted, “We still have quality issues with the IP we buy in. It’s not badly written but it’s not always in sync with the specification.”
IP out of context
A further issue is that IPs will often respond differently depending on the context in which they are used. “IPs are often built for certain use-cases. If you use one for something a little different, that can result in issues,” Murdock added.
“When you’re connecting IPs together, you will get new, emergent behaviors. You need to figure out what those are and how to verify them,” said Mark Glasser, verification architect at Cerebras Systems.
“For EDA, I worry that it’s difficult to find global solutions when the EDA companies are all looking at point solutions,” said Dan Romaine, senior manager of silicon design engineering at AMD.
Sprott added, “Sometimes we ask for the wrong things, such as ‘show the waves this way’ but forget we’ve made the design bad in the first place. We want to make it less likely that common-or-garden bugs make it in the first place and to find them more easily.”
Glasser pointed to the growing use of Python for enhancing verification environments and making them more flexible. “The purpose of that is programmer productivity: get more tests done in a certain amount of time. We also need to make things more debuggable and make it easier to interrogate the design or the simulator during debug.”
Python provides useful ways of abstracting away from raw data elements and signals with constructs for handling lists. “You can manipulate things more easily than with SystemVerilog, which is a fairly clunky language. We should spend some time improving that language and look at the kinds of abstractions available in Python,” said Glasser. The issue with Python is that it lacks direct support for hardware abstractions for things like bitstreams, memories, and timing. To manage them with Python “we either have to fake those or extract services from the simulator”.
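A minimal sketch of the kind of thing Glasser is describing (not an example from the panel): Python's list comprehensions make one-line queries over a transaction log easy, while a fixed-width bit-vector has to be "faked" by masking plain integers. The `Transaction` class, the `to_bits` helper, and the sample data are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    addr: int
    data: int
    is_write: bool

# Python has no native bit-vector type, so a fixed-width value is faked
# by masking an unbounded int down to the desired width.
def to_bits(value: int, width: int = 32) -> int:
    return value & ((1 << width) - 1)

# A hypothetical log of transactions captured from a simulation.
log = [
    Transaction(0x1000, 0xDEADBEEF, True),
    Transaction(0x1004, 0x0000FFFF, False),
    Transaction(0x1000, 0xCAFEF00D, True),
]

# A one-line query that would take a loop or array-locator methods in
# SystemVerilog: all writes to a given address.
writes_to_0x1000 = [t for t in log if t.is_write and t.addr == 0x1000]
last_data = to_bits(writes_to_0x1000[-1].data)
```

In a real flow the log would come from the simulator (the "services" Glasser mentions) rather than being built by hand.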
Murdock said greater adoption of open-source software in EDA would make it easier for users to collaborate in the way they do in the software world, although the panelists acknowledged the same culture is currently lacking in chip-design teams, where most companies try hard to maintain proprietary frameworks and do not allocate much engineering time to supporting open-source software. Most of the development of open-source EDA is happening either in the RISC-V community with organizations like the Chips Alliance or in academia, with DARPA’s recent interest helping to boost activity.
“UVM is open source and that didn’t just happen. A lot happened to make that a reality. UVM is successful because all the vendors have adopted it,” said Glasser, adding that one issue with UVM today is that within the Accellera infrastructure the number of people who can contribute is limited. “More people need to be able to contribute to it.”
One problem with company-focused infrastructure is that it can restrict the value of learning novel techniques and methods that would lead to improvements in timescales and quality, panelists argued. “We’re not even sharing the simple stuff. How many different scripts does the industry need to wire modules together?” asked Decker.
Sprott pointed to transaction recording as an underused tool in verification for not entirely obvious reasons. One appears to be a lack of knowledge of the value of these kinds of tools coupled with difficulty in making the skills learned in their use transferable.
“With my firmware hat on, I’m dealing with GCC and make. I get to take my skills with me if I move companies,” Murdock said. “With chip design, it’s likely to be a different [vendor’s] simulator.” Though the simulators will be designed to work with UVM, they may miss functions others have or use different commands and approaches.
“The portability of skills is important. How interested people are in learning them is about how they can take the skills they learn elsewhere,” Sprott added, pointing to the growing use of Intel’s DPC++ library as an example of something that has spread across the industry because it can be used on a variety of targets. “All the guys see value in that type of thing and that’s why it seems to catch on.”