DVCon Europe provides an opportunity to take a look at what may be the most important standard in verification to appear since UVM. The portable stimulus working group at Accellera is trying to develop a way to make verification much more reusable and efficient.
On Wednesday, October 19, a group of verification experts will give delegates an overview of portable stimulus, its underpinning and what it can do. Among them is Adnan Hamid, founder and CEO of Breker Verification Systems, a keen exponent of the graph-based description approach adopted by the working group.
Hamid says there are several motives that lie behind portable stimulus: “The primary motivation is saving money. The landscape of chip design is changing. Companies have to get chips done with fewer people.”
“The second motivation is communication: they are big SoCs being built by global development teams,” Hamid adds. “The third motivation is pervasive knowledge. A lot of knowledge is required to test each core. When you’ve placed a video decoder in 15 different SoC designs, how do you transfer the knowledge required to verify that decoder to each of those projects?
“Once I have that model of the decoder or a USB block I should be able to reuse it across generations of designs,” Hamid says, but that isn’t the case. “When I walk into a chip house, they say: ‘What gets my goat is that we changed only 5 per cent of the logic from a second-generation core to the third. But we had to completely redo the tests’.
“The idea for portable stimulus is that we should be able to write a model that describes the behavior of the core. I should be able to then compose the unit models into a test for the entire system. I should be able to take the model for a DSP, take the UVM block for that, and then combine them with other models into a subsystem. Then combine the models for USB and other modules and go to full chip. The problem today is that, with the technology widely in use, composability is not possible at all,” Hamid explains.
There is another dimension to verification that portable stimulus is meant to address, Hamid adds: “To do full-chip verification you need to switch over from the UVM testbench to so-called software-driven verification: with a C program running on the core [that employs the same stimuli]. We call this vertical reuse.
“The other aspect is horizontal reuse. Maybe I can get a full-chip simulation running, but it’s so big it’s barely ticking over. We want, in those cases, to go to emulation or a silicon prototype. They are built by people with very different backgrounds to those who develop the simulations. But the requirements are the same.”
Hamid says the working group has aligned behind the same technique: “There are now multiple vendors coming into the portable stimulus group, and everybody has settled on graph-based techniques. That tells us we need to move beyond UVM.
“The intention of graph-based modeling is very straightforward. When human beings write directed tests, they are going through a goal-solving exercise: ‘I need to check X. To do that I need to make A happen’. So I set up preconditions to do that. Maybe I have to write an input to signal B. I start with what needs to be checked and then map out the paths through the graph that give you that scenario. But you can put an entire set of test cases into one graph.
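The goal-solving exercise Hamid describes can be sketched as a small data structure: nodes are test actions, edges are preconditions, and resolving a goal walks the graph backwards to produce the ordered steps of a directed test. This is a minimal illustrative sketch; the types, names, and actions here are hypothetical and not from any standard.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of goal-driven graph-based test generation:
// each node is an action with preconditions. Resolving a goal walks
// the graph backwards and emits actions in the order a directed test
// would perform them.
struct ScenarioGraph {
    std::map<std::string, std::vector<std::string>> prereqs;

    void add(const std::string& action,
             const std::vector<std::string>& pre = {}) {
        prereqs[action] = pre;
    }

    // Depth-first: schedule every precondition before the goal itself,
    // skipping actions that are already in the plan.
    void resolve(const std::string& goal,
                 std::vector<std::string>& plan) const {
        for (const auto& p : prereqs.at(goal))
            resolve(p, plan);
        if (std::find(plan.begin(), plan.end(), goal) == plan.end())
            plan.push_back(goal);
    }
};
```

Because a single graph can hold many such goals, a tool can enumerate different paths through it to generate an entire family of test cases rather than one hand-written test.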
“Bringing graphs to chip design brings major productivity gains. Human beings react very well to graphs: they represent an amazing way to communicate. Naturally, graphs lend themselves to visualisation. And, finally, graphs lend themselves to analysis for things like static reachability,” Hamid explains.
“You are cruising along and looking for some bug in the design. There is perhaps some corner case to be tested, so you put in a constraint that may interact with other parts of the testbench and result in much less coverage. If I put in a constraint and half of it goes red onscreen, I know that something isn’t right. It gives you a view of verification intent that you didn’t have before.”
Although graph-based scenarios form the core of the portable stimulus approach, they need to be connected to the design. Hamid says: “You don’t want block-level graphs to know anything about system level. So, the last part of this is the hardware-software interface. We need a set of abstractions to manage memory, registers, interrupts, and address transactions. These are the only four things we can work with at the SoC level. There is no other way to talk to the IP in the SoC. If I map those to the graphs, the tools can absorb the mechanics.”
He adds: “A register write is a simple-sounding thing. But to do it, we may be sending a transaction over AXI. If we are writing over SPI, we might have to convert it into an SPI transaction. They can go to wildly different places.”
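The register-write example can be sketched as an abstract operation lowered onto interchangeable bus back-ends, so the test never knows which transport is underneath. The class and function names below are illustrative assumptions, not any standard's API.

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch: the same abstract register write is lowered to
// whichever bus transport the platform provides (e.g. AXI or SPI).
struct BusBackend {
    virtual ~BusBackend() = default;
    virtual std::string write(uint32_t addr, uint32_t data) = 0;
};

struct AxiBackend : BusBackend {
    std::string write(uint32_t addr, uint32_t) override {
        // In a real flow this would issue an AXI write transaction.
        return "AXI WR @" + std::to_string(addr);
    }
};

struct SpiBackend : BusBackend {
    std::string write(uint32_t addr, uint32_t) override {
        // Over SPI the same write becomes a serialised command frame.
        return "SPI CMD @" + std::to_string(addr);
    }
};

// The test (or graph node) only sees the abstract operation.
std::string reg_write(BusBackend& bus, uint32_t addr, uint32_t data) {
    return bus.write(addr, data);
}
```

The point of the indirection is that the graph-level model stays identical whether the write ultimately lands on AXI, SPI, or something else entirely.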
Such an interface is not a new idea. The UVM register access layer already provides a way to talk about registers in an abstract fashion.
Connecting the models with the verification environment requires a language and the working group has opted to provide a choice of two. One is a domain-specific language that, like SystemVerilog, has been developed for verification. The other choice is C++.
“Users want a C++ solution,” Hamid claims. “Four out of five stakeholders live in a software world. The UVM guy is on SystemVerilog. But the SoC integration guy is working in a software environment. The next level is the emulation guy, who is also in a C++ world, as is the post-silicon bring-up guy. And the last is the driver guy, who uses C++.”
Portable stimulus may not just be a way to extend the experience gained with UVM into system-level design; it may also pave the way towards a verification regime tied much more closely to system-level concerns.
“At this point, we are beyond what the standards committee is discussing. But portable stimulus is not just important because it will save customers money. It’s important for EDA because it’s one opportunity to get past just hardware verification and into the software world.
“Once you really start thinking about portable stimulus you find you can have all these different stakeholders involved. They can all be writing tests and write them in a common format. And go to a framework where the back-end guys are building up testbenches where any test can run anywhere,” Hamid concludes.