Scaling automated software testing with Virtualizer Development Kits

By Victor Reyes | Posted: April 14, 2016
Topics/Categories: Embedded - Integration & Debug, EDA - Verification


The growing code content of embedded systems is making it increasingly important to do as much software testing as possible, as soon as possible, in order to deliver a high-quality product within tight timescales.

Developers use several forms of software testing, each with its own purpose and limitations in terms of how soon it can be applied in the development process.

Unit testing is used to check small blocks of code in isolation, so that they can be used later as reliable building blocks for more complex code. Unit testing can happen very early in the software development process, because it doesn’t rely on the target system having been fully implemented.
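
As a minimal sketch of the idea (shown in Python for brevity; embedded teams often use C unit-test frameworks such as Unity or CppUTest in a host build), a unit test exercises one small, hypothetical function in isolation, with no dependence on the target system:

```python
import unittest

def saturating_add(a, b, limit=255):
    """Hypothetical unit under test: 8-bit saturating addition."""
    total = a + b
    return limit if total > limit else total

class SaturatingAddTest(unittest.TestCase):
    def test_normal_sum(self):
        self.assertEqual(saturating_add(10, 20), 30)

    def test_saturates_at_limit(self):
        # Overflow must clamp to the limit rather than wrap around.
        self.assertEqual(saturating_add(200, 100), 255)

if __name__ == "__main__":
    unittest.main()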

Integration testing checks that a combination of pre-tested software blocks works as expected. It focuses on the interaction of software blocks with each other and their environment. Integration testing usually happens in layers, from bottom up or top down. It eventually requires access to the target hardware to run code during what is known as hardware/software integration testing.
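
A bottom-up integration step can be rehearsed before the real lower layers or hardware are available by running a module against a stub of the layer beneath it. The sketch below uses hypothetical names (TemperatureMonitor, read_register) purely for illustration:

```python
from unittest import TestCase, main, mock

class TemperatureMonitor:
    """Hypothetical upper-layer module that reads a sensor through a driver."""
    def __init__(self, driver):
        self.driver = driver

    def celsius(self):
        raw = self.driver.read_register(0x10)   # sensor data register (invented)
        return raw * 0.5                        # hypothetical scaling factor

class TemperatureMonitorIntegrationTest(TestCase):
    def test_reads_sensor_through_driver(self):
        # Bottom-up integration: the real monitor runs against a stubbed driver.
        driver = mock.Mock()
        driver.read_register.return_value = 50
        monitor = TemperatureMonitor(driver)
        self.assertEqual(monitor.celsius(), 25.0)
        driver.read_register.assert_called_once_with(0x10)

if __name__ == "__main__":
    main()
```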

System testing evaluates whether a complete system of hardware and software acts as defined in its specification. It may involve tests that evaluate function, performance, and security, using techniques including large regressions, fault injection, and stress tests. System testing needs access to a complete hardware/software implementation so it can only happen late in the development process, when the costs of fixing any bugs that it reveals can be high.

Acceptance testing verifies that a solution works for the user, in as realistic a context as possible. Acceptance testing often has legal, contractual and certification implications.

One of the most important techniques in the developer’s arsenal is regression testing, which helps show whether a change in one part of the software affects other parts of it in an unexpected manner. Common regression testing strategies include re-running test cases to check whether previously fixed faults have re-emerged. This is especially important when one software platform is used in multiple products, or when bugs fixed in one configuration may produce side effects in others.
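
One common pattern is to add a test case for every fixed defect and keep it in the permanent regression suite, so the fault cannot silently return. A hedged sketch of that pattern, with an invented bug number and decoding function:

```python
import unittest

def parse_speed_kph(frame):
    """Hypothetical function that decodes vehicle speed from a 2-byte payload."""
    # Bug #4711 (invented): the original code dropped the high byte, so speeds
    # above 255 km/h wrapped around. Keeping both bytes fixes the fault.
    return (frame[0] << 8) | frame[1]

class Bug4711RegressionTest(unittest.TestCase):
    def test_speed_above_one_byte_does_not_wrap(self):
        # Re-run on every build so the previously fixed fault cannot re-emerge.
        self.assertEqual(parse_speed_kph(bytes([0x01, 0x2C])), 300)

if __name__ == "__main__":
    unittest.main()
```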

The challenges for developers are therefore twofold: to bring as much software testing as possible as far forward in the development process as they can, despite not having the target hardware available from the outset; and to keep track of the ways in which bugs emerge and are fixed as the code evolves across multiple generations and end configurations.

The limitations of conventional system testing

Conventional system testing is usually done in special hardware labs using dedicated equipment and software testing tools. The automotive industry, for example, usually uses Hardware-in-the-Loop labs to perform functional requirement software testing. Although there are lots of commercial tools to manage both interactive and automated testing, this is still a cumbersome and expensive process.

Software tools that support this type of testing are scarce and companies rely on in-house test infrastructure, which enables some test automation. One example is the Linaro organization and its validation labs. This consortium of companies is automating the testing of Linux and Android-based software on ARM-based devices. Linaro has its own testing infrastructure and software testing framework, called LAVA.

Regression testing is made more complex by the need to use dedicated hardware labs for system testing. Many embedded products live for years and need patching long after their system and acceptance tests were completed. Since hardware labs are so expensive, they are widely reused, leading to resource conflicts when, for example, testing the current product gets in the way of using the labs to develop system tests for its next generation. Hardware needs to be reconfigured, reconnected, and retuned repeatedly to match the requirements of different product versions.

Increasing the size of hardware labs to deal with the demands of multiple teams, with multiple product variants and increasingly complex test suites, is expensive, in equipment, operating and maintenance terms. This makes hardware labs big and relatively inflexible investments.

Virtual prototyping: an alternative

In unit testing, the target device can be abstracted away by tools or infrastructure to enable the software to run on a standard PC. In integration testing, on the other hand, the software layers closest to the hardware have to run on a target device. System testing, in turn, requires even more detail of the final system in order to stimulate and analyze the software.

The gap between these approaches can be bridged using virtual prototypes (VPs): simulation models of the target hardware that execute on a host PC. A VP is detailed enough to allow the integrated software (compiled for the target device) to run unmodified, yet it does not have the inherent limitations of hardware with regard to scalability, flexibility, control and determinism.

Synopsys combines a virtual prototype with the relevant tools for hardware and software analysis, plus connections to third-party tools, and calls the result a Virtualizer Development Kit (VDK). VDKs can be used for:

Early hardware/software integration: This makes a model of the target hardware available early, shortening the software development process.

Early creation of system tests: System testing requires access to hardware, but using a VDK means test engineers can develop system tests before it is available.

Pre-running system tests: A VDK can be used to do many early test runs. A complex system may need tens of thousands of functional tests, so running many of them in parallel on multiple VDKs can reduce test cycles dramatically (see the sketch after this list).

Recreating complex scenarios: A VDK can be useful for complex tasks such as doing fault injection and running stress tests without risking the hardware. It is also often easier to set up a VDK to mimic a particular test case than to drive the real hardware into that situation repeatedly.

Regression testing: A VDK-based test framework can be archived and restored each time a change needs to be tested to search for regressions. The same version of tools, models and configuration can be used for years after the product is released. Many configurations can be used in parallel and time isn’t wasted reconfiguring hardware to swap between them.

Deterministic debugging at the system level: If a test fails, using a VDK enables debugging to happen with full control of the software, hardware and environment.
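
As a rough sketch of the parallel-execution idea mentioned above, the script below fans a list of test names out across several simulated targets. The runner command is a placeholder, not an actual Virtualizer interface; any wrapper that boots a VDK instance and executes one test case would slot in:

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Placeholder command: stands in for whatever script boots a VDK instance
# and runs one test case against it; not an actual Virtualizer CLI.
VDK_RUNNER = "./run_test_on_vdk.sh"

def run_test(test_name):
    """Boot a fresh simulated target and run a single system test on it."""
    result = subprocess.run([VDK_RUNNER, test_name], capture_output=True, text=True)
    return test_name, result.returncode

def run_suite(test_names, workers=8):
    """Fan the test list out across parallel VDK instances and collect results."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(run_test, test_names))
    failed = [name for name, code in results.items() if code != 0]
    print(f"{len(results) - len(failed)} passed, {len(failed)} failed: {failed}")
    return failed

if __name__ == "__main__":
    run_suite([f"functional_test_{i:04d}" for i in range(32)])
```

Because each test gets its own freshly booted simulated target, the runs stay independent and deterministic, which also simplifies triage when one of them fails.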

Integrating VDKs in a software testing ecosystem

To be integrated into a software testing ecosystem, a VDK must first fulfil three important requirements:

Completeness: A VP usually focuses on the digital hardware of the target device that is directly touched by the software. Other board-level components can be functionally modeled and integrated into a VDK, but these models have to abstract away analog, mixed-signal and electrical effects. A VDK for use in testing has to be complete enough to allow the full software stack to execute without problems.

System completeness: System testing using VDKs requires access to a model of both the target device and its physical context, so a VDK should connect to the external tools and models necessary to mimic the overall system.

Test reuse: A VDK must be flexible enough to integrate with existing test frameworks, so that investments that have already been made in developing test suites can be protected.

More info

For additional information, please view the webinar, “Better Testing Through Automation and Continuous Integration with Virtualized Development Kits”

Author

Victor Reyes is a technical marketing manager in the System Level Solutions group at Synopsys. His responsibilities are in the area of virtual prototype technology and tools with special focus on automotive. Reyes received his MSc and PhD in electronics and telecommunication from the University of Las Palmas, Spain, in 2002 and 2008 respectively. Before joining Synopsys, he held positions at CoWare, NXP Semiconductors and Philips Research.

