Using a ‘divide and conquer’ approach to system verification

By John Willoughby | Posted: December 1, 2007
Topics/Categories: EDA - ESL

Today’s increasingly complex designs typically need to undergo verification at three different levels: block, interconnect and system. There are now well-established strategies for addressing the first two, but the system level, while in many ways the ultimate test, remains the weakest link in the verification process.

System verification normally begins only after a prototype board or SoC hardware is available, at a time when any bugs that are uncovered are also extremely difficult to fix. This flow also delays a major part of the software debug process until late in a project’s life, because the performance limitations of RTL simulation make it impossible to run any meaningful amount of software on the model.

This article describes an alternative ‘divide and conquer’ strategy, illustrated with a case study. The approach breaks the system down so that multiple hardware modeling approaches can be used, and it takes advantage of readily available software layers. Using portions of the actual software to drive the hardware deepens the verification of both elements of the design, and the segmentation can be carried out within the project’s time and resource budgets.

With the increased prevalence of system-on-chip (SoC) designs, the functionality of the system depends on both the hardware and an increasingly significant software component. Verifying the hardware alone will not tell a project team whether the design will work; only by running the actual software on the design will the team know that it is, in fact, meeting the specification. Bugs or missing functionality found after a chip has been fabricated can often be fixed in software, but at the expense of reduced system performance or dropped features.

Levels of verification

Figure 1. Design verification stages

Verification typically takes place at three different levels: block, interconnect and system (Figure 1).

At the block level, specific functions are created and verified, typically as synthesizable register transfer level (RTL) code written in Verilog or VHDL. Assertions are increasingly being used to describe expected behavior. Both formal verification (primarily of the assertions) and dynamic simulation are commonly used to verify block-level functionality. Testing at this level is primarily white-box: both the behavior of the block at its primary interface ports and the behavior of its internal logic are explicitly exercised. Performance of the verification tools is not typically an issue because the scope of the verification effort is relatively small.

After the blocks have been assembled to create a model of the complete hardware design, the next verification task is to determine whether the correct blocks were used and whether they were connected correctly. Bus- and interface-level testbenches are normally used to verify the design at this level. Bus and interface monitors – usually created as a set of assertions – validate the correct I/O behavior of each block. Formal analysis, particularly of the bus implementation, is still used, but this technique becomes less effective as complexity increases. The workhorse for verification at this level is still simulation, with directed random testing techniques employed to create exhaustive sequences of I/O, bus or memory transactions. Performance becomes a challenge here, but most test suites consist of many smaller tests that can be run in parallel on simulation farms.
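As a rough illustration of what such a monitor looks like, the sketch below shows a simple handshake check written as a SystemC module. The signal names (req, gnt) and the 16-cycle timing rule are hypothetical; a real monitor would encode the rules of whichever bus protocol the design actually uses.

```cpp
// Hypothetical sketch of a bus monitor written as a SystemC module.
#include <systemc.h>

SC_MODULE(bus_monitor) {
    sc_in<bool> clk;
    sc_in<bool> req;   // master request
    sc_in<bool> gnt;   // arbiter grant

    void check_handshake() {
        bool pending = false;
        int wait_cycles = 0;
        while (true) {
            wait();  // next rising clock edge
            if (req.read() && !gnt.read()) {
                pending = true;
                ++wait_cycles;
                // Assumed fairness rule: a request must be granted within 16 cycles.
                sc_assert(wait_cycles <= 16);
            } else if (gnt.read()) {
                // Assumed rule: a grant may only follow a request.
                sc_assert(pending || req.read());
                pending = false;
                wait_cycles = 0;
            }
        }
    }

    SC_CTOR(bus_monitor) {
        SC_THREAD(check_handshake);
        sensitive << clk.pos();
    }
};
```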

The third level of testing is to add the software to the hardware model and create a model of the entire system. This is the ultimate test of the project and, no matter how much verification is performed specifically on the hardware, the project team has no way of knowing that the system works until the actual software is proven to run on it. System-level testing has presented a variety of challenges for project teams and remains the weakest link in the verification process for a number of reasons.

System validation challenges

Block- and interconnect-level testing is well understood and established methodologies exist. System-level testing strategies also exist, but these are typically placed in the flow after an expensive prototype board has been developed or when an actual SoC is available. Obviously, hardware bugs discovered at this stage can be difficult to debug and painful to fix. This also means that software debug does not start until the physical prototype is available.

Running software on the simulated RTL code is not usually feasible: the simulation performance of an entire SoC modeled in RTL precludes running any meaningful amount of software on the model.

One workaround is to develop high-level system models that can be used to debug the software. This approach can provide the performance required to run some level of software, but results in duplicate model development and adds risk to the project because the abstract models may not match the actual design.

A third path that designers are starting to follow, and one that will be highlighted in the following case study, is based on a ‘divide and conquer’ approach to system validation. It segments the system to utilize multiple hardware modeling approaches and takes advantage of readily available software layers. Using portions of the actual software to drive the hardware increases the verification of both components since neither can be fully tested in the absence of the other.

Case study

Figure 2. System model layers

Figure 3. Hardware/software and system testbench configuration

Figure 4. Simplified block diagram

The project team’s goal in this example was to perform system testing to validate some combination of hardware and software prior to the development of the first physical prototype. Instead of creating a physical prototype, the project team created a virtual system prototype (VSP) to perform hardware/software co-verification. Put simply, the team ran the software on the model of the hardware. Because the VSP had lower performance than the actual physical prototype, the team did not plan to run the entire software suite on the VSP, but rather to run lower-level software layers starting initially with the hardware abstraction layer (HAL) (Figure 2).

The HAL is the very bottom layer of software and provides the basic routines that interface with and control the actual hardware. Its purpose is to separate the application software from machine-specific implementations and so make the software more portable across multiple platforms. The HAL exports an application programming interface (API) to the higher-level system functions and handles hardware-dependent tasks, including:

  • processor and device initialization;
  • cache and memory control;
  • device support (addressing, interrupt control, DMA operations, etc.);
  • timing and interrupt control;
  • hardware error handling.

By utilizing HAL routines to drive the hardware model, the project team was able to validate the most important part of the hardware/software interaction: the layer where the two domains meet directly.
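The sketch below gives a flavor of what such a HAL routine might look like. The register map, bit fields and function names are hypothetical and are not taken from the project in question; the point is that software touches the hardware only through a small, fixed set of calls that can be bound either to a physical address map or to the bus model inside the VSP.

```cpp
// Minimal, assumed sketch of a HAL-style routine driving a memory-mapped device.
#include <cstdint>
#include <map>

namespace hal {

// In the VSP these two calls would be bound to the bus model; here they are
// stubbed with a simple map so the sketch stands alone.
static std::map<uintptr_t, uint32_t> fake_regs;
inline void     reg_write(uintptr_t addr, uint32_t value) { fake_regs[addr] = value; }
inline uint32_t reg_read(uintptr_t addr)                  { return fake_regs[addr]; }

constexpr uintptr_t AUDIO_BASE  = 0x40010000;   // hypothetical base address
constexpr uintptr_t AUDIO_CTRL  = AUDIO_BASE + 0x00;
constexpr uintptr_t AUDIO_STAT  = AUDIO_BASE + 0x04;
constexpr uint32_t  CTRL_ENABLE = 1u << 0;
constexpr uint32_t  STAT_READY  = 1u << 0;

// Bring the audio block out of reset and poll until it reports ready.
inline bool audio_init() {
    reg_write(AUDIO_CTRL, CTRL_ENABLE);
    for (int i = 0; i < 1000; ++i) {
        if (reg_read(AUDIO_STAT) & STAT_READY)
            return true;
    }
    return false;   // device never became ready
}

} // namespace hal
```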

In this model, they replaced the higher software layers with a system-test environment serving as the testbench (Figure 3). Of course, the testbench also needed to interact directly with the hardware, particularly when providing ‘stub’ interfaces to parts of the system that were not included in the model. In this case, the project team used SystemC to create the system-level testbench, although any high-level programming language could have been used.
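A minimal sketch of that arrangement is shown below: a SystemC module acting as the system-level testbench drives the design through assumed HAL calls and stubs in traffic for a block that is not present in the model. The function names are illustrative only and not the project's actual code.

```cpp
// Hedged sketch of a SystemC module acting as the system-level testbench.
#include <systemc.h>
#include <cstdint>
#include <cstddef>

namespace hal {                                     // assumed HAL API (declarations only;
bool audio_init();                                  // bound to the hardware model elsewhere)
void net_inject(const uint8_t* data, size_t len);   // 'stub' entry point
}

SC_MODULE(system_testbench) {
    SC_CTOR(system_testbench) { SC_THREAD(run_tests); }

    void run_tests() {
        // Drive the hardware model through HAL routines rather than individual signals.
        if (!hal::audio_init()) {
            SC_REPORT_ERROR("tb", "audio block failed to initialise");
            return;
        }
        // Stub interface: inject a canned packet where the real network
        // interface would normally deliver traffic.
        static const uint8_t packet[] = {0xDE, 0xAD, 0xBE, 0xEF};
        hal::net_inject(packet, sizeof(packet));

        wait(100, SC_US);            // allow the decode path to run
        sc_stop();
    }
};
```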

The use of HAL routines provided the additional benefit of allowing the verification engineer to create system tests at a higher level of abstraction, raising productivity during test generation. Directed random techniques can also be used at this level, and the combination provides a powerful environment for creating system tests.

Transaction-level interfaces allow project teams to write transaction calls instead of manipulating individual signals, and each transaction call often results in many individual signal transitions over a period of many clock cycles. In the same way, moving up to the HAL API level allows a greater level of abstraction where a single HAL API call may result in a series of transactions.
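The following sketch illustrates that layering with a hypothetical DMA engine: one HAL-level call expands into several register-write transactions, each of which in turn unrolls into many cycles of signal activity inside the cycle-accurate model, and a directed-random system test then reduces to a short loop of HAL calls. The register map and function names are assumptions for illustration.

```cpp
// Assumed example of the abstraction stack: signals -> transactions -> HAL calls.
#include <cstdint>
#include <random>

namespace hal {
void reg_write(uintptr_t addr, uint32_t value);   // one bus transaction per call

constexpr uintptr_t DMA_SRC  = 0x40020000;        // hypothetical register map
constexpr uintptr_t DMA_DST  = 0x40020004;
constexpr uintptr_t DMA_LEN  = 0x40020008;
constexpr uintptr_t DMA_CTRL = 0x4002000C;

// One HAL-level call: four register-write transactions, each of which becomes
// many cycles of signal activity in the cycle-accurate model.
void dma_copy(uint32_t src, uint32_t dst, uint32_t len) {
    reg_write(DMA_SRC,  src);
    reg_write(DMA_DST,  dst);
    reg_write(DMA_LEN,  len);
    reg_write(DMA_CTRL, 1u);                      // start bit
}
} // namespace hal

// A directed-random system test then becomes a short loop of HAL calls.
void random_dma_test(unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<uint32_t> len(4, 4096);
    for (int i = 0; i < 100; ++i)
        hal::dma_copy(0x80000000u, 0x81000000u, len(rng) & ~3u);
}
```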

Evening out the performance

On this project, the hardware model represented something of a performance bottleneck, although a number of techniques were deployed that helped overcome the problem.

The application included an embedded network interface, an MPEG decoder block and audio processor, various control functions such as bus arbiters and interrupt controllers, the CPU, a memory controller, a USB interface, and the memory itself (Figure 4). The CPU was a pre-verified ARM core, which meant the team did not need to re-verify that it worked. The RTL processor model was therefore replaced by an instruction set simulator (ISS) that modeled the CPU’s behavior efficiently at the cycle level.
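For readers unfamiliar with the technique, the fragment below shows, in heavily simplified form, the kind of fetch/decode/execute loop an ISS is built around. The toy instruction encoding and memory interface are assumptions for illustration and bear no relation to the actual ARM model used on the project.

```cpp
// Much-reduced sketch of a cycle-level instruction set simulator core.
#include <cstdint>
#include <array>

struct Iss {
    std::array<uint32_t, 16> regs{};   // architectural registers
    uint32_t pc = 0;
    uint64_t cycles = 0;

    uint32_t read_mem(uint32_t addr);  // bound to the platform's memory model

    void step() {
        uint32_t insn = read_mem(pc);          // fetch
        uint32_t op   = insn >> 28;            // decode (toy encoding)
        switch (op) {                          // execute
            case 0x1: regs[(insn >> 24) & 0xF] += insn & 0xFFFF; break;   // add immediate
            case 0x2: pc = insn & 0x0FFFFFFFu; cycles += 2; return;       // branch
            default:  break;                   // treat as nop
        }
        pc += 4;
        cycles += 1;   // cycle-level accounting, not just instruction counting
    }
};
```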

The other major blocks of the system were compiled using Carbon Design Systems’ Model Studio to produce cycle-accurate compiled models from the ‘golden RTL’. These models included a number of optimizations that provided performance enhancements over the original RTL versions.

The platform development environment was able to take advantage of Carbon’s OnDemand feature to improve performance.

OnDemand monitors the activity in the compiled model itself by tracking inputs and storage elements in the model and identifying inactive or repeating behavior. When it detects that the model is inactive, it ‘shuts off’ the model.

Figure 5. Block diagram showing inactive blocks

Figure 6. Block diagram with Replay

Eliminating activity in the model until it becomes necessary improves the platform’s execution speed. During this particular project, each of the major functional blocks was compiled independently and was automatically disabled when not in use. For example, when the CPU was receiving and processing data from the network interface, the MPEG, USB and audio blocks were often all inactive and could be turned off (Figure 5).
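The following fragment illustrates the general idea of activity gating; it is not Carbon’s implementation, merely a sketch of how evaluation of a compiled block can be skipped while its inputs are unchanged and its internal state is quiescent.

```cpp
// Illustration only of activity gating (not Carbon's OnDemand implementation).
#include <cstdint>
#include <vector>

struct CompiledBlock {
    virtual void eval_cycle() = 0;                  // full cycle-accurate step
    virtual bool is_quiescent() const = 0;          // no pending internal work
    virtual std::vector<uint32_t> inputs() const = 0;
    virtual ~CompiledBlock() = default;
};

struct GatedBlock {
    CompiledBlock& block;
    std::vector<uint32_t> last_inputs;

    void eval_cycle() {
        auto in = block.inputs();
        // Skip evaluation while the block is idle and its inputs are unchanged.
        if (block.is_quiescent() && in == last_inputs)
            return;                                 // block is 'shut off'
        last_inputs = in;
        block.eval_cycle();
    }
};
```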

The models in the platform also made use of Carbon’s Replay technology to accelerate the software debug cycle. The first time software was run, the model captured the behavior at its primary I/O points and stored it. When the same software was re-run, the captured outputs were used instead of executing the model itself. This continued until the first point at which the input signals differed from what had been recorded, indicating changes in the software or design. At that point, execution of the model began and took over from the replayed responses. Figure 6 illustrates how the bus interface for the USB block was held inactive and the stimulus/response information replayed from the previously captured data.
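Again as an illustration rather than a description of Carbon’s code, the sketch below shows the essence of the replay scheme: recorded outputs are returned while the incoming stimulus matches the recording, and the model goes ‘live’ at the first divergence.

```cpp
// Assumed sketch of the replay idea: play back recorded I/O until inputs diverge.
#include <cstdint>
#include <functional>
#include <vector>

using IoVec = std::vector<uint32_t>;
struct IoRecord { IoVec inputs, outputs; };

struct ReplayWrapper {
    std::vector<IoRecord> recording;          // captured on the first run
    size_t cursor = 0;
    bool   live   = false;                    // becomes true at first divergence

    // 'evaluate' stands in for executing the compiled model for one cycle.
    IoVec step(const IoVec& inputs,
               const std::function<IoVec(const IoVec&)>& evaluate) {
        if (!live && cursor < recording.size() &&
            recording[cursor].inputs == inputs) {
            return recording[cursor++].outputs;   // replay; the model stays idle
        }
        live = true;                              // inputs diverged: run the model
        return evaluate(inputs);
    }
};
```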

Results

By adopting an intelligent approach to segmenting system verification objectives, the project team was able to reduce project risk and accelerate the development schedule. They used a VSP to debug HAL software routines and hardware before committing to an actual hardware prototype. This approach allowed more tests to be run within the time budget, and the additional tests were of higher quality because they used the actual system software. While the higher-level software was not simulated, the software that interacted directly with the hardware was verified, and that provided the most important view into the combined hardware/software model.

Not every project has a specific low-level software layer such as a HAL. But with software development planning and careful segmentation of the software into hardware-specific and hardware-independent layers, many project teams can create a design that supports hardware/software co-verification and also increases software portability.

By taking advantage of performance techniques for the hardware model, including the use of cycle-level compiled models derived from the golden RTL, the project team in the case study was able to improve the performance of the entire hardware model. The end result was shorter project development time because of the early start for the software debug. The process also reduced risk by increasing both the amount and quality of verification performed on the hardware before any commitment had to be made to a physical prototype.

Carbon Design Systems
375 Totten Pond Road
Suite 100/200
Waltham
MA 02451-2025
USA
T: 1 781 890 1500
W: www.carbondesignsystems.com
