At a panel session at the Design Automation Conference (DAC) in late June, Synopsys customers talked about some of the ways they make verification more efficient and bring technologies such as formal, emulation, and simulation together.
Mirella Negro Marcigaglia, design verification manager at STMicroelectronics, said the microcontroller teams now routinely use formal verification to check entire subsystems, such as serial controllers. “Our first choice is to verify digital IP using formal verification. We have been successful using only formal without a single simulation, but this is not the general case. We still need simulation.
“First we identify all the parts that we can verify in formal. Normally we also apply formal apps that perform protocol checks and formal register verification, using specifications extracted from IP-XACT. We then complement that with simulation. With Verdi Coverage it’s possible to integrate both formal and simulation coverage.”
STMicroelectronics also makes use of formal technologies for checking the robustness of IP cores. As the microcontrollers are often used in mission-critical applications, it is important to track down any latent faults that might only appear after months or years of use in the field. “We use Certitude. Once we have properties proven through VC Formal, by running Certitude we might be able to identify undetected faults. And we apply the same approach to simulation.”
Emulation’s shift left
For a number of the design teams, emulation and virtual prototyping have taken on increasing importance. Senthil Dayanithi, senior director of engineering at Qualcomm, said: “What we have seen over the past six, seven, eight years is the increasing use of emulation to deliver our products. We use the emulation box for design validation and software development. Typically, the whole idea of using emulation is to shift left as well as to accelerate the validation of our SoCs.”
To enable the reuse of verification IP, Dayanithi said the team put effort into architecting transactors with high reusability. “We were able to use the same verification content on emulators, to move into the commercial emulation box as well as our own homegrown emulation box,” he explained. “We have real peripheral devices connected to our emulation systems. The key is a pushbutton flow with high-level software using real peripherals. It’s about having power for real vectors, not power for short vectors. We use emulation for gate-level runs and power estimation. By harnessing the emulation platform there is so much more we can do.”
Raju Kothandaraman, graphics hardware director at Intel, added: “As design sizes grow, emulation spending increases. We’ve had to think about how to use emulation boxes more efficiently.”
To assess the efficiency of emulation, Kothandaraman said the Intel team had adopted a number of metrics that include compile time, the effective clock frequency of the model, wall-clock efficiency, debug turnaround time, and the utilization levels of the emulator boards. “From a compile-time aspect, when we started we were able to do one compile a day. Now it’s four or five. But we are always aware that if compile time goes down, we need to look at what other bottlenecks remain. We spend time looking at logs so we can build emulation-friendly RTL.”
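The metrics Kothandaraman lists lend themselves to simple bookkeeping. As a rough illustration only, the following Python sketch models one emulation run and derives wall-clock efficiency and board utilization; the field names, formulas, and numbers are assumptions for the example, not Intel's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class EmulationRun:
    """One emulation job; fields mirror the metrics named in the article."""
    compile_hours: float           # RTL-to-model compile time
    model_mhz: float               # effective clock frequency of the model
    emulated_cycles: int           # design cycles executed in the run
    wall_clock_hours: float        # total elapsed time, including stalls
    debug_turnaround_hours: float  # time from failure to actionable waveform
    boards_used: int               # emulator boards the model occupied
    boards_reserved: int           # boards booked for the job

    def wall_clock_efficiency(self) -> float:
        """Fraction of wall-clock time spent actually emulating cycles.
        A low value points at overheads such as chatty DPI calls or host I/O."""
        emulation_hours = self.emulated_cycles / (self.model_mhz * 1e6) / 3600
        return emulation_hours / self.wall_clock_hours

    def board_utilization(self) -> float:
        return self.boards_used / self.boards_reserved

# Invented example numbers for a single run:
run = EmulationRun(compile_hours=5.5, model_mhz=1.2,
                   emulated_cycles=8_000_000_000, wall_clock_hours=4.0,
                   debug_turnaround_hours=2.0, boards_used=6, boards_reserved=8)
print(f"wall-clock efficiency: {run.wall_clock_efficiency():.0%}")
print(f"board utilization:     {run.board_utilization():.0%}")
```

Tracking these numbers per run is what makes the trade-offs visible: as Kothandaraman notes, improving one metric (compile time) simply exposes the next bottleneck.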
Kothandaraman said the team spends time checking and fixing long timing paths to improve model clock frequency: “We need to be constantly looking at reports to make sure they are addressed.”
To address wall-clock efficiency, the team tries to identify inefficient DPI calls and rewrite them, as well as making use of offline debug and local memory where possible. “We are seeing a three- to four-fold reduction in test execution and debug time,” he said.
“How are we able to do this on a day-in, day-out basis? This is about building a team to believe in how to do more with less. That requires a mindset change. To do it, you need to be able to express a clear set of goals. This is not something just for Intel. We are also pushing the industry to go along with us,” Kothandaraman said.
Offline performance measurement
Andrew Ross, principal member of technical staff at AMD, said maximizing the use of offline-debug techniques is helping designers understand the power and performance bottlenecks in the company’s graphics processing units (GPUs). He said emulation makes it possible to run full silicon workloads and find out how long it takes to render frames.
“What if that number doesn’t meet what we expect it to meet? Where did things go wrong? Where we are looking is actually a very large haystack. That sounds like a debug problem. With a debug problem we need better visibility inside the silicon,” Ross said. “In the RTL we have counters [for performance measurements] and we can expose those to software so we can collect a view of activity within the GPU. In emulation we can have something that’s identical but [if handled on the emulator] that would tie up resources. Instead, we take counters and dump them out using zDPI and go and analyse the results just like silicon.
“We can run representative workloads and get a view of what’s going on. We can take the data and feed it through a bunch of analysis engines: we have analysis engines for performance and power,” Ross added.
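The host-side half of the flow Ross describes is essentially log crunching: counters are dumped out of the emulator and aggregated offline, just as they would be from silicon. A minimal Python sketch of that aggregation step, with an invented dump format and counter names (real dumps depend on the RTL's counter instrumentation and the zDPI plumbing):

```python
from collections import defaultdict

def summarize_counters(rows):
    """Aggregate per-block performance counters dumped from an emulation run.
    Each row is (block, counter, value); the schema here is illustrative."""
    totals = defaultdict(dict)
    for block, counter, value in rows:
        totals[block][counter] = totals[block].get(counter, 0) + int(value)
    report = {}
    for block, c in totals.items():
        cycles = c.get("cycles", 0)
        report[block] = {
            # busy/stall as a percentage of total cycles observed
            "busy_pct": 100.0 * c.get("busy", 0) / cycles if cycles else 0.0,
            "stall_pct": 100.0 * c.get("stall", 0) / cycles if cycles else 0.0,
        }
    return report

# Invented dump from a short workload:
dump = [
    ("shader_core", "cycles", 1_000_000),
    ("shader_core", "busy",     640_000),
    ("shader_core", "stall",    210_000),
    ("mem_ctrl",    "cycles", 1_000_000),
    ("mem_ctrl",    "busy",     910_000),
]
for block, stats in summarize_counters(dump).items():
    print(block, stats)
```

Because the analysis runs on dumped data rather than on the emulator itself, the box is freed up for the next workload, which is the resource argument Ross makes.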
The result is a continuous, data-driven analysis regime that “augments simulation-based approaches”, Ross noted. “The significant value is in the teams it pulls together. It’s a combination of teams pulling analysis data well before traditional deadlines. Doing this pre-silicon, we are not under the crunch of post-silicon timescales.”
Seonil Brian Choi, senior principal engineer at Samsung, said emulation is helping design teams deal with the feature changes made to smartphone SoCs even as they approach tapeout. “Because of mobile market changes, the customer often wants to have a different feature in the middle of the project. And this does not just happen once,” he said. “But our tapeout day never changes. So, we have less and less time to validate [the altered design].”
Choi said the Samsung teams employ a combination of emulation and virtual prototyping using four different platforms, each with a speed/bringup-time tradeoff, to accelerate verification so that changes can quickly be incorporated and tested. “We are building a lot of methodologies around emulation,” he said, which, combined with cultural changes at Samsung, mean “our engineers know change is going to come in the middle of the project. But they are not afraid of that”.