‘Shocking’ quality sees vendors organize around RISC-V verification

By Chris Edwards |  No Comments  |  Posted: July 14, 2022
Topics/Categories: Blog - EDA, IP

A false sense of security among some of the design teams now incorporating various RISC-V cores into their SoCs has led to several of the EDA vendors stepping up efforts to encourage them to do deeper verification.

The openness of the RISC-V ecosystem means designers can do anything from implementing their own cores from the instruction set architecture (ISA) specification to buying readymade cores that have been verified in much the same way as those from traditional processor-IP vendors such as Arm. In that middle ground, it has emerged that some design teams either do not realize how much verification a processor needs or choose not to perform it, even though processors have enormous state spaces compared with the glue logic around them.

“Some of the quality of the RISC-V out there being implemented is frankly shocking, and that includes people who should know better,” claimed Rupert Baines, chief marketing officer at processor-customization specialist Codasip. “They are basically running ‘hello world’ to check it doesn’t crash as their test. They often haven’t appreciated how much work has gone into verifying the processor core they have bought from Arm and others.”

With freedom, responsibility

Kevin McDermott, vice president of marketing at Imperas, said: “With RISC-V we see massive design freedom. Now everyone can be a processor architect. There are many people who know the theory of processor design and are motivated to do it. Practically every team now is going to be touching RISC-V even if they only use it to provide simple minion cores.

“The challenge is with verification: where are the processor verification experts? Many of them are inside the processor-design companies,” added McDermott. Adapting to the world of processor verification involves some changes to approach for some teams. “We recommend that the engineers read the specs and design the RTL but in addition document what they intend to achieve. Don’t say the specification is the RTL.”

Codasip has aligned with Imperas and Breker Verification Systems to bring together tools and methodologies to verify RISC-V implementations more exhaustively. Imperas has been instrumental in assembling a testbench architecture, built around UVM and SystemVerilog components, that supports both open-source and commercial tools.

“Imperas provides the big picture: checking you haven’t broken something in the specification,” Baines said. “Breker is focusing on lower-level issues, such as cache coherency and other more traditional digital verification tasks.”

Breker said it will deploy its test-synthesis products to generate tests to exercise the logic controlling cache-coherent RISC-V clusters as well as perform checks for security and power management.

Tutorials and methodologies

At DVCon earlier this year, Imperas gave a tutorial on methods for verifying RISC-V implementations, emphasising the importance of using lock-step comparisons between the RTL and a golden-reference model, which can be the one built by the company, in a test environment that also supports the injection of asynchronous events such as interrupts. The testbench architecture that Imperas favors uses UVM to manage the tests and allow the collection of statistics for coverage analysis at the instruction level. For instruction generation, users can write their own tests or use tools such as Google's riscv-dv to provide a constrained-random stream of opcodes and data references. The components inside the instruction-oriented testbench coordinate through a now-standard interface: the RISC-V verification interface (RVVI).
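The lock-step idea described above can be sketched in a few lines: step a golden-reference model once per retired RTL instruction and flag the first divergence in architectural state. This is a minimal illustrative sketch only, not the Imperas reference model or the RVVI interface; the class and function names, and the toy two-opcode ISA, are invented for illustration.

```python
# Illustrative lock-step comparison sketch. RefModel and lockstep_compare
# are hypothetical names, not part of any Imperas or RVVI API, and the
# "ISA" here is reduced to two opcodes purely to keep the example short.

class RefModel:
    """Toy golden-reference model: 32 registers, ADDI/ADD only."""
    def __init__(self):
        self.pc = 0
        self.regs = [0] * 32

    def step(self, instr):
        # instr = (opcode, rd, rs1, imm_or_rs2)
        op, rd, rs1, arg = instr
        if op == "addi":
            self.regs[rd] = (self.regs[rs1] + arg) & 0xFFFFFFFF
        elif op == "add":  # here arg is rs2
            self.regs[rd] = (self.regs[rs1] + self.regs[arg]) & 0xFFFFFFFF
        self.regs[0] = 0   # x0 is hard-wired to zero
        self.pc += 4
        return self.pc, list(self.regs)

def lockstep_compare(program, rtl_trace):
    """Step the reference model once per retired RTL instruction and
    report the first divergence in PC or register state."""
    ref = RefModel()
    for i, instr in enumerate(program):
        if ref.step(instr) != rtl_trace[i]:
            return f"mismatch at instruction {i}"
    return "match"
```

A real flow would drive the comparison from retirement events streamed over RVVI and would also model asynchronous events such as interrupts, which is precisely where simple trace-diffing approaches fall short.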

“It means we can develop a methodology that’s consistent and helps you work with others and share with others if you want,” McDermott said. “We also don’t want people reinventing the wheel and doing unnecessarily different things.”

The use of instruction-level coverage, McDermott said, helps ensure that errors caused by interactions between execution engines and the surrounding logic do not go unnoticed. “You may have a nested sequence of interrupts that trigger a bug. And when an instruction returns from an interrupt it may not return to where you expect. You need to ensure that the instruction has run and completed as expected. This makes coverage on instructions that are run vital.

“Also, bugs often don’t live in isolation. They tend to fester. So you will want to see how the first test found that bug and expand it to cover close neighbors. This is why it is important to have a good verification methodology,” McDermott said.
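The instruction-level coverage bookkeeping McDermott describes amounts to counting hits against a set of required bins (for example, opcode crossed with execution context) and listing the holes. This is an illustrative sketch under invented names; real flows use SystemVerilog covergroups inside the UVM testbench rather than anything like this.

```python
# Illustrative instruction-level coverage sketch. Bin names and the
# coverage_report function are hypothetical, invented for this example.
from collections import Counter

def coverage_report(executed, required_bins):
    """Count hits per coverage bin, e.g. (opcode, context) pairs such as
    ("mret", "nested_irq"), and report the coverage percentage and holes."""
    hits = Counter(executed)
    holes = [b for b in required_bins if hits[b] == 0]
    pct = 100.0 * (len(required_bins) - len(holes)) / len(required_bins)
    return pct, holes
```

Crossing opcodes with contexts such as “inside a nested interrupt” is what catches the return-from-interrupt cases quoted above, which plain line coverage of the RTL would miss.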


