An overview of the Open Source VHDL Verification Methodology and two of the libraries it uses.
Many years ago, when the designers of digital circuits first started verifying their creations in simulators, ‘directed tests’ were the norm. These tests were at the heart of a simple verification methodology: predict all of the test patterns required to exercise the design fully, encode them into a testbench and run them in a simulator.
That methodology no longer works because most digital circuits have reached the size of complete systems. Nobody can predict and correctly encode all imaginable test patterns in the time available for most projects these days, so we need verification libraries that provide pre-defined, well-tested methodologies that speed up testbench creation and guarantee reliable test results.
Hardware verification languages introduced a methodology called constrained random stimulus generation – also known as constrained random testing (CRT) or constrained random verification (CRV).
With CRV you apply random (but realistic) test data to your design and check the output for correct responses. Unfortunately, even a well-constrained random test pattern will contain a significant portion of useless data, which may produce confusing results. In other words, even with constrained random inputs you may have problems recognising meaningful outputs.
Another popular methodology is functional coverage (FC). In this approach, you set the goals of the test (aspects of design functionality that must be verified) and feed a long stream of random data to the design until those goals are reached.
FC data collection uses ‘coverage bins’ – data structures that group strategic ranges of values for key design variables. The goals of the test are reached when the amount of data collected in these bins reaches a preset threshold. While using FC to analyse output can help manage random tests, adding feedback from the FC unit to the random test generator creates a more powerful, intelligent test environment.
Until recently, both CRV and FC required either careful manual encoding, or the use of a specialised verification language. SystemVerilog was the first general-purpose language to provide reasonable facilities for CRV and FC, and so it has become accepted (in many designers’ opinions) that you have to use SystemVerilog to perform these advanced verification methodologies.
Arguably, part of that ‘acceptance’ was a result of how hard some EDA vendors pushed their SystemVerilog solutions. And why not? SystemVerilog was developed with verification in mind.
However, the EDA community is a lively place, with engineers instinctively seeking alternative solutions to any problem. It was during a webinar in early 2009 (arranged by Aldec and VHDL training partner SynthWorks) that questions were asked about pseudo-randomisation (and other) verification libraries in VHDL. [This was around the time that VHDL-2008 was published, before UVM was available and when OVM and VMM were the talk of the industry.]
That webinar resulted in a brief to develop advanced yet flexible verification methodologies in VHDL that would be independent of any vendor’s tools, in which a ‘methodology’ was regarded as a combination of library code, documentation and examples.
The result was the Open Source VHDL Verification Methodology (OS-VVM), announced in late 2011 by Aldec and SynthWorks as a contribution to the VHDL community. OS-VVM includes both CRV and FC methodologies for engineers designing ASICs and FPGA-based applications using VHDL. So let’s consider CRV and FC in more detail.
Constrained random verification
CRV can be separated into two tasks – generating quality random numbers, and specifying constraints that produce a stream of data with the desired properties. For the first task, the key data source is a ‘true’ random number generator (RNG), which should meet two requirements:
- No bias (all numbers in the range should appear with equal probability); and
- No period (i.e. there should not be any repeating patterns in the generated stream of numbers).
However, it is difficult to create a generator with both of these properties, so as a compromise a pseudo-random number generator (PRNG) is used. A PRNG preserves the ‘no bias’ property but relaxes ‘no period’ to ‘a very long period’: it generates a long, repeating sequence of equally distributed random numbers.
Popular PRNGs generate each new random number based on the previously generated number. This means the generators require a ‘seed’, a number specified by the user for the very first execution of the function. Selecting the same seed during each simulation run guarantees that the PRNG generates the same sequence of numbers.
VHDL is equipped with the random procedure UNIFORM, which is available in the MATH_REAL package. Since the procedure combines two internal generators, it requires two positive integer seeds. It generates pseudo-random real numbers in the open interval between 0 and 1.
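As a minimal sketch of how UNIFORM is called (the entity name and seed values here are invented for illustration):

```vhdl
library ieee;
use ieee.math_real.all;

entity uniform_demo is
end entity;

architecture sim of uniform_demo is
begin
  process
    -- Seeds must be positive; reusing the same pair reproduces the sequence.
    variable seed1 : positive := 7;
    variable seed2 : positive := 3;
    variable r     : real;
  begin
    for i in 1 to 5 loop
      uniform(seed1, seed2, r);  -- r lies strictly between 0.0 and 1.0
      report "random value: " & real'image(r);
    end loop;
    wait;
  end process;
end architecture;
```

Because the seeds are inout parameters, UNIFORM updates them on every call; storing them in testbench variables is what makes a run repeatable.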
Based on the uniformly distributed output of the RNG, data belonging to different value ranges or showing non-uniform distribution can be generated. The process of imposing additional conditions on the random data stream can be implemented in two ways:
- Through specifying abstract expressions, or constraints, that restrict the output of the RNG, using a special process called a constraint solver to adjust the RNG output to meet the constraints; and
- Through specifying procedures or functions that transform the output of the RNG so that it meets all requirements.
Note: the constraint solver approach is used in SystemVerilog. It brings a high degree of flexibility, but an overloaded solver may slow simulation. The second approach is used in the universal package introduced later in this article.
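A minimal sketch of the second, transformation-based approach. The packet-length constraint here (64 to 1518 bytes, with 512 treated as a forbidden value) is purely hypothetical, chosen to show both a range transformation and a rejection-style constraint:

```vhdl
library ieee;
use ieee.math_real.all;

entity constrain_demo is
end entity;

architecture sim of constrain_demo is
begin
  process
    variable seed1 : positive := 1;
    variable seed2 : positive := 2;
    variable r     : real;
    variable len   : integer;

    -- Transform a uniform real in (0,1) into an integer in [min_v, max_v].
    impure function rand_int(min_v, max_v : integer) return integer is
    begin
      uniform(seed1, seed2, r);
      return min_v + integer(trunc(r * real(max_v - min_v + 1)));
    end function;
  begin
    for i in 1 to 10 loop
      -- Constraint: legal lengths 64 to 1518, but never the
      -- (hypothetical) reserved value 512 -- re-draw if it appears.
      loop
        len := rand_int(64, 1518);
        exit when len /= 512;
      end loop;
      report "length: " & integer'image(len);
    end loop;
    wait;
  end process;
end architecture;
```

No solver is involved: the procedure simply reshapes the uniform stream, which is why this style simulates quickly.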
Functional coverage
Coverage is a way of measuring how much of a design specification was exercised during simulation. The quality of the coverage results depends heavily on the test plan. For example, 100% coverage means all the features selected for testing were tested. This means that FC cannot be fully automated, as the test and the analysis of its results have to be carefully prepared.
FC complements other kinds of coverage, including:
- Code coverage, which tests the structure of the code and implies correct design behaviour only when complete; and
- Property coverage, which adds a new layer of description to the design – properties representing desired behaviour of the design – and checks if everything was properly tested.
FC also collects samples of values of selected design variables that reflect design functionality. By distinguishing important values (or ranges of values) of those variables you can identify what the design should be doing at a given point in time.
Consider a climate control unit. Depending on the output of a temperature sensor the unit will be switching on a heater, a cooler or staying idle. Based on the value of a state register, the unit’s microcontroller will be reading a program, reading data, writing data, etc. By checking if enough samples of values were collected, FC can determine whether all the critical design functions were exercised.
Counting every value of each design variable would take too long so, as mentioned above, FC uses the concept of bins, data items that represent ranges of values of a given variable. Simple FC is one-dimensional, e.g. the coverage bins for the temperature sensor might have just one value (0°C) or a range of values (20 to 25°C).
It is also possible to perform ‘cross coverage’ using multi-dimensional bins. For instance, you could exercise a display using bins for individual pixels (e.g. x=3, y=7) or areas of pixels (e.g. x=8 to 15, y=32 to 39). For each bin, a testbench author can assign a minimum sample count or a minimum percentage that signifies complete coverage.
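The climate control bins above can be sketched with OS-VVM’s CoveragePkg. This assumes the classic CovPType interface (GenBin, AddBins, ICover, IsCovered, WriteBin) and a simulator library mapping named `osvvm`; exact names and overloads may differ between OS-VVM releases, and the temperature ranges are illustrative:

```vhdl
library ieee;
use ieee.math_real.all;
library osvvm;
use osvvm.CoveragePkg.all;

entity cover_demo is
end entity;

architecture sim of cover_demo is
  shared variable TempCov : CovPType;  -- protected-type coverage object
begin
  process
    variable seed1 : positive := 5;
    variable seed2 : positive := 11;
    variable r     : real;
    variable temp  : integer;
  begin
    -- One bin per strategic range of the temperature input.
    TempCov.AddBins(GenBin(-10, -1, 1));  -- heater expected on
    TempCov.AddBins(GenBin(0));           -- single-value bin: 0 °C
    TempCov.AddBins(GenBin(20, 25, 1));   -- comfort band, unit idle
    TempCov.AddBins(GenBin(30, 40, 1));   -- cooler expected on

    while not TempCov.IsCovered loop
      uniform(seed1, seed2, r);
      temp := -15 + integer(trunc(r * 61.0));  -- uniform in -15 to 45
      -- drive the design with temp and check its response here, then:
      TempCov.ICover(temp);                    -- record the sample
    end loop;

    TempCov.WriteBin;  -- print per-bin sample counts
    wait;
  end process;
end architecture;
```

Cross coverage follows the same pattern with AddCross and two GenBin calls, one per dimension, as in the pixel example above.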
Note: code coverage and property coverage features are typically available as add-ons for most simulators (i.e. licenses are required), but FC implemented in OS-VVM works on any standards-compliant VHDL simulator and requires minimal coding.
Experienced VHDL users can create their own implementation of CRV and FC but it is not a task that can be completed quickly, which is why the subject of flexible VHDL test packages was raised during the webinar in early 2009.
Jim Lewis, director of training at SynthWorks Design, rose to the challenge and developed two very useful packages, namely RandomPkg and CoveragePkg, which are now at the heart of the Open Source VHDL Verification Methodology (OS-VVM).
- RandomPkg provides functions for uniform randomisation over a range or a set, and for discrete, weighted distributions. Constraints are formed by calling randomisations within sequential code. Within OS-VVM this package is used to further refine the initial randomisation performed to cover holes in the functional coverage.
- CoveragePkg simplifies the modelling and collection of high-fidelity functional coverage, in both point and cross coverage variants. It provides methods for interacting with the coverage data structure. One feature, ‘Intelligent Coverage’, provides for randomisation across coverage holes and is a foundation of the methodology.
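A short sketch of RandomPkg in use, assuming its classic RandomPType interface (InitSeed, RandInt, DistInt) and an `osvvm` library mapping; the specific values are invented for illustration:

```vhdl
library osvvm;
use osvvm.RandomPkg.all;

entity random_demo is
end entity;

architecture sim of random_demo is
begin
  process
    variable RV   : RandomPType;
    variable data : integer;
    variable op   : integer;
    variable w    : integer;
  begin
    RV.InitSeed(RV'instance_name);  -- unique but repeatable seed
    for i in 1 to 5 loop
      data := RV.RandInt(0, 255);           -- uniform over a range
      op   := RV.RandInt((1, 2, 3, 5, 8));  -- uniform over a set
      w    := RV.DistInt((7, 2, 1));        -- weighted: 0, 1 or 2 in ratio 7:2:1
      report integer'image(data) & " " &
             integer'image(op)   & " " & integer'image(w);
    end loop;
    wait;
  end process;
end architecture;
```

Because each call is just sequential code, constraints can be layered with ordinary if/case statements around the randomisation calls.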
What are the benefits of Intelligent Coverage? Let’s consider an elementary example. When an RNG produces uniformly distributed values in the range 1 to n, it takes approximately n·log(n) trials to cover all values in the range. This number is even higher for non-uniform distributions. By randomly generating only those values that have not yet been covered, intelligent coverage reduces the number of trials to n. Put another way, intelligent coverage guarantees that each value in the randomised range shows up exactly once.
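The n-trial behaviour can be sketched with CoveragePkg’s hole-directed randomisation. This assumes the classic CovPType method RandCovPoint (which draws only from uncovered bins); names may vary between OS-VVM releases:

```vhdl
library osvvm;
use osvvm.CoveragePkg.all;

entity smart_cover_demo is
end entity;

architecture sim of smart_cover_demo is
  shared variable Cov : CovPType;
begin
  process
    variable val    : integer;
    variable trials : natural := 0;
  begin
    Cov.AddBins(GenBin(1, 20));   -- 20 single-value bins covering 1 to 20
    while not Cov.IsCovered loop
      val := Cov.RandCovPoint;    -- randomise only across uncovered bins
      -- drive val into the design under test here ...
      Cov.ICover(val);            -- record the sample
      trials := trials + 1;
    end loop;
    report "all bins covered in " & integer'image(trials) & " trials";
    wait;
  end process;
end architecture;
```

With a goal of one sample per bin, the loop terminates after exactly as many trials as there are bins; a plain uniform RandInt loop over the same range would typically need several times more.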
The EDA community is rich with engineers seeking practical solutions to problems and the development of the OS-VVM is a prime example of that in action, as questions asked in a webinar prompted the development and launch of advanced tool-independent verification methodologies in VHDL. Although VHDL does not have built-in commands supporting CRV and FC, it is possible to implement them using VHDL packages, ensuring that complex design and verification can continue to be achieved using one language.
The OS-VVM can be found at www.osvvm.org, where a dynamic community is contributing to the development of the methodology and the VHDL language.
About the author
Jerry Kaczynski is a research engineer at Aldec