System-on-chip (SoC) designs are changing how functionality is defined. A system is no longer simply the sum of its parts. Earlier devices were built for a single purpose and performed their tasks in a fairly linear manner, so most of the functional blocks could be arranged to mirror the dataflow between them and the function each block performed. Today, an SoC is a tightly coupled, multi-processor, heterogeneous computing environment supporting multiple concurrent tasks. The interdependence between these tasks is creating a new challenge – that of SoC-level verification.
Consider this scenario: Power management used to be a block-level issue. Clock gating and other low-level optimization strategies were used to reduce dynamic power and leakage. But low-level power management strategies are no longer enough. The industry faces a critical power challenge because areas on some chips must be kept powered down to prevent overheating and thermal runaway. Power management has become an SoC-level problem.
The SoC verification gap
Existing verification solutions cannot address these problems, so an SoC verification gap is emerging. We have faced similar ‘gaps’ before. They typically indicate that a change in tooling, language, or methodology is required. That is again the case here.
It is unlikely that existing methodologies, such as constrained-random test generation, will solve the problem. They struggle to keep up with block-level demands and suffer from declining efficiency and effectiveness.
They also cannot deal with the heart of these SoCs – namely, the processors. Today, most designers use directed tests for full-chip SoC verification. These tests take a long time to write and considerable effort to maintain. Another option is to run production software, but it may not be ready in time and is not tuned for verifying the hardware design.
To understand the way forward, we need to remind ourselves how designers look at the problem of architecting a chip and dealing with issues such as resource contention, interactions between pieces of the system, and functionality that involves multiple processors.
First, they use dataflow diagrams to show how data moves around the system, concurrent pathways that data can take, and possible contentions. Then, they consider the control systems that are needed to regulate flows and allocate resources. Verifying the SoC should follow the same path.
In fact, the ideal solution is to capture the very thought process that those designers use to specify the system.
Verification takes inspiration
Our company, Breker Verification Systems, has adopted that process with TrekSoC. It takes those dataflow diagrams, captured as graphs, and uses them to automatically generate C code test cases that run on the processors and coordinate with activity on the primary inputs and outputs of the design.
Automatically generated code can orchestrate complex usage scenarios that would be difficult to create manually. As part of this generation process, it can also add randomization, exercising corner cases that were not specifically considered but are permitted by the graph.
The SoC verification gap will prove to be a temporary phenomenon that exists only until tools that solve the problem become commonplace. The good news is that such tools are already emerging, meaning that the gap can be filled as soon as a company realizes that it has the problem.