Integrated, comprehensive assertion-based coverage

By TDF |  No Comments  |  Posted: March 1, 2006
Topics/Categories: EDA - Verification


The emergence of the SystemVerilog and PSL assertion languages promises to improve the effectiveness of existing verification flows. First, assertions give better local observability of the functionality they represent. Second, assertions augment the textual specification with a more formal, executable representation of the functionality. Third, since the assertion languages have common semantics for both formal and simulation-based environments, they provide a path to enhancing current simulation flows with model-checking technology.

We propose that, in the same way that coverage-driven flows require the measurement of code coverage to indicate how well the implementation has been tested, assertion-based verification flows require ‘assertion coverage’ measurements to indicate how much of the valid design behavior has been tested for. Specifically, an assertion coverage tool needs to identify:

  • How well the assertions have been exercised.
  • How complete the set of assertions is.
  • How extensively the signals used in the assertions have been exercised (linking with code coverage).

Assertion coverage easily identifies untested design behavior, and reports how well the assertions cover the intended behavior and its RTL implementation.

Complemented by test suite optimization and a unique specification coverage tool based on requirements traceability technology, the complete solution gives greater confidence in deciding when the verification process is finished.

Focused expression coverage as a cornerstone

Central to the detailed measurement of assertion coverage is the metric of Focused Expression Coverage (FEC). FEC requires that every decision (i.e., every HDL branch) takes its two possible outcomes (True, False) at least once; that every sub-expression that comprises the condition controlling the decision/branch has taken its two possible outcomes at least once; and that every sub-expression has been shown to independently affect the decision/branch’s outcome. The requirements-based test controlling each decision/branch therefore has an explicit mapping to the HDL code structure.

In addition, FEC coverage is efficient because only 2N tests, at most, are needed to cover a condition, where N is the number of sub-expressions.

As an example of FEC condition analysis, consider a three-input AND gate (inputs ‘A’, ‘B’ and ‘C’; output ‘D’). FEC demands that both outcomes are seen on D, so all inputs must be high at least once to make the outcome true. Each input must also be seen to independently control the gate, by being the sole low input that makes the outcome false. The test vectors required to achieve 100% FEC condition coverage are therefore:


A B C => D
1 1 1 => 1
0 1 1 => 0
1 0 1 => 0
1 1 0 => 0
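This independence requirement can be checked mechanically. The sketch below (our illustration in Python, not a description of any particular tool) verifies that the four vectors above satisfy FEC for the AND gate: both outcomes are seen on D, and each input has a pair of vectors, differing only in that input, that flips the output:

```python
from itertools import product

def and3(a, b, c):
    return a and b and c

# The four FEC vectors from the table above.
vectors = [(1, 1, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fec_satisfied(fn, vectors, n_inputs):
    """Check the FEC conditions over a vector set: the decision takes
    both outcomes, and each input is shown to independently affect the
    outcome (a pair of vectors differing only in that input, with
    different results)."""
    outcomes = {bool(fn(*v)) for v in vectors}
    if outcomes != {True, False}:
        return False
    for i in range(n_inputs):
        # Look for an independence pair for input i.
        pair_found = any(
            v1[i] != v2[i]
            and all(v1[j] == v2[j] for j in range(n_inputs) if j != i)
            and fn(*v1) != fn(*v2)
            for v1, v2 in product(vectors, repeat=2)
        )
        if not pair_found:
            return False
    return True

print(fec_satisfied(and3, vectors, 3))      # True: the 4 vectors suffice
print(fec_satisfied(and3, vectors[:3], 3))  # False: C's independence unproven
```

Note that four vectors is comfortably within the 2N = 6 upper bound, and that dropping any one of the falsifying vectors leaves an input whose independent effect on the output is unproven.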

Advanced assertion coverage metrics

Just as code coverage does more than simply count the number of lines of RTL traversed by the simulator, assertion coverage does more than determine whether a given assertion has been hit or not. Assertions represent a complex range of behaviors, and assertion coverage must feed back the extent to which this range has been exercised. So, in addition to the classic assertion checking done by the simulator, TransEDA has introduced additional advanced assertion metrics:

Assertion step coverage

The assertion step coverage (ASC) metric counts the number of steps exercised in a temporal assertion. It is important that all assertions have all steps exercised at least once, to ensure that the whole range of behaviors they describe has been tested. The basis of step coverage is defined in the SystemVerilog language reference manual [3], but TransEDA has enhanced its capability to not only identify the traversed steps, but also take into account the coverage of the step conditions according to FEC analysis (assertion step condition coverage).

ASC Example

In the SystemVerilog assertion example below, there is an optional step where signal ‘d’ can be high for up to seven clock cycles, before ‘e’ is seen high:


(1) a ##1 b |=> c ##1 d [*0:7] ##1 e || f;

Step coverage report (H=Hit, M=Miss at that position on the line above):


a ##1	b |=>	c ##1	d[*0:7] ##1	e||f;
H	H	H	M	H
Step Coverage = 4/5 = 80%

Although the simulator may report that this assertion has triggered and passed every time, step coverage can show that the step where signal ‘d’ is high has never been taken; more testing is therefore required to gain confidence that the design behaves correctly in that scenario. Additionally, condition analysis may identify that the assertion has never seen signal ‘f’ high to complete the final step, so more tests need adding to exercise this behavior.
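To make the idea concrete, here is a toy Python model of step coverage (our illustration; a real tool instruments the simulator rather than re-matching traces). It matches the consequent of assertion (1), c ##1 d[*0:7] ##1 (e||f), against per-cycle signal values and records which of its three steps each passing trace exercised:

```python
# Each trace is a list of per-cycle signal dicts, beginning at the
# cycle where the consequent of (1) starts matching.

def match_consequent(trace):
    """Return (matched, steps_hit) for one trace, where steps_hit is
    drawn from {'c', 'd[*0:7]', 'e||f'}."""
    hit = set()
    if not trace or not trace[0]['c']:
        return False, hit
    for reps in range(8):                                  # d[*0:7]
        if not all(1 + t < len(trace) and trace[1 + t]['d'] for t in range(reps)):
            break
        end = 1 + reps
        if end < len(trace) and (trace[end]['e'] or trace[end]['f']):
            hit.add('c')
            if reps > 0:
                hit.add('d[*0:7]')                         # optional step taken
            hit.add('e||f')
            return True, hit
    return False, hit

def step_coverage(traces):
    """Aggregate steps hit across all matching traces."""
    seen = set()
    for tr in traces:
        matched, hit = match_consequent(tr)
        if matched:
            seen |= hit
    return len(seen), 3                                    # 3 consequent steps

cyc = lambda c=0, d=0, e=0, f=0: {'c': c, 'd': d, 'e': e, 'f': f}
no_d   = [cyc(c=1), cyc(e=1)]                  # d step skipped entirely
with_d = [cyc(c=1), cyc(d=1), cyc(f=1)]        # d taken once, ends on f

print(step_coverage([no_d]))           # (2, 3): 'd[*0:7]' never exercised
print(step_coverage([no_d, with_d]))   # (3, 3): all steps covered
```

The first run mirrors the report above: the assertion passes, yet the optional ‘d’ step is never taken until a second, different trace is added.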

Assertion variable coverage

Assertion variable coverage (AVC) measures how well the signals and variables referenced in an assertion have been exercised. This metric automatically provides the link from the assertions to the parts of the implementation that control that behavior. Given an assertion, it identifies the assignments to each signal or variable referenced in that assertion. Code coverage information can then identify those assignments that have never been exercised during simulation. It is essential that all assignments to the signals be exercised at least once to give some confidence that those assignments do not cause assertions to fail. In addition to a simple ‘hit or miss’ test on each variable assignment line, we can provide more detail by using FEC to measure the coverage of terms in the condition for making that assignment (assertion variable condition coverage).

AVC example

Looking again at example (1):


(1) a ##1 b |=> c ##1 d [*0:7] ##1 e || f;

Even though we have identified that every time this assertion passed, it matched with only signal ‘e’ high on the final step, there could still be assignments to signal ‘e’ in the RTL code that have not yet been exercised.

Variable coverage report for assignments to ‘e’:


Count	Line	Assignment	Condition
1	132	e <= 0;	(rst)
13	140	e <= 0;	(!rst)&&(s==`idle)
5	156	e <= 1;	(s==`rdy)&&(c||(m&&n))
*0*	162	e <= 0;	(s==`abort)
3	174	e <= 1;	(s==`strobe)&&(c)
Variable coverage = 4/5 = 80%

It can be seen here that the assignment on line 162 has never been exercised during simulation; it is therefore necessary to add more tests to exercise this behavior.

Condition coverage on the assignment on line 156 may identify that the condition for taking that assignment has never been satisfied with the sub-expression ‘(m&&n)’ true, and specifically that ‘m’ is responsible because it has never gone high, whereas ‘n’ has.
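The kind of analysis described in the last two paragraphs can be sketched as follows (our illustration; the operand names are hypothetical stand-ins for the terms of the line-156 condition). Given the operand values sampled on each cycle the assignment was reached, it reports any operand that never took one of its two values:

```python
# Assertion-variable condition analysis sketch: find operands of the
# line-156 condition  (s == `rdy) && (c || (m && n))  that never took
# one of their two possible values during simulation.

def operand_gaps(observations, operands):
    """observations: list of dicts mapping operand name -> bool.
    Returns (operand, missing_value) pairs."""
    gaps = []
    for op in operands:
        seen = {obs[op] for obs in observations}
        for val in (True, False):
            if val not in seen:
                gaps.append((op, val))
    return gaps

obs = [  # hypothetical sampled operand values, cycle by cycle
    {'s_is_rdy': True,  'c': True,  'm': False, 'n': True},
    {'s_is_rdy': True,  'c': False, 'm': False, 'n': False},
    {'s_is_rdy': False, 'c': True,  'm': False, 'n': True},
]
print(operand_gaps(obs, ['s_is_rdy', 'c', 'm', 'n']))
# [('m', True)]: 'm' never went high, so (m&&n) could never be true
```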

Assertion fault coverage

Assertion fault coverage (AFC) measures the capability of your set of assertions to identify potential errors in the behavior of your design. It can indicate whether or not you have enough assertions to consider errors on particular signals, and also indicate which errors any particular assertion can detect. A typical error detected by an assertion would be when a signal is required to go high at a certain time but stays low instead. If no assertion is sensitive to that signal at that time, then that fault may go unnoticed.

TransEDA’s patented AFC technology is based upon mutation coverage research [1] and works by injecting individual signal faults into a copy of the assertion and then determining if that mutant property can be made to fail. If it can, then the assertion is proven to be sensitive to such a fault.

Although these proofs can be obtained by simulation, the use of formal technology is much more productive.

Whereas the assertion step and variable coverage metrics specifically measure simulation activity, fault coverage also assists formally driven verification flows.

AFC Example

To demonstrate fault coverage we will use a simple FSM as an example. It clearly illustrates how a single assertion, although suitable for testing the most sensitive part of the state machine (the conditional toggle), does not ensure that a mutation on one of the state variables is detected.

A second assertion needs to be defined to capture the rest of the behavior and complete assertion fault coverage.

Specification:

  • “While A is low, B will toggle between high and low; otherwise B remains high”


RTL Code:
case (B)
  1 : if (!A)
        B <= 0;
  0 : B <= 1;
endcase

Faults to consider:

  • A assigned to 0 in error
  • A assigned to 1 in error
  • B assigned to 0 in error
  • B assigned to 1 in error

1st Assertion:

!A && B |=> !B

…is sensitive to (…and why):

  • A assigned to 0 in error
    • !0 && B |=> !B can fail
  • B assigned to 1 in error
    • !A && 1 |=> !B can fail

The remaining 2 faults cannot be detected by this assertion because their mutated assertions cannot trigger and therefore cannot fail:

  • A assigned to 1 in error
    • !1 && B |=> !B cannot fail
  • B assigned to 0 in error
    • !A && 0 |=> !B cannot fail

We therefore write another assertion to test for these faults.

2nd Assertion:


A || !B |=> B

…is sensitive to (…and why):

  • A assigned to 1 in error
    • 1 || !B |=> B can fail
  • B assigned to 0 in error
    • A || !0 |=> B can fail

These are the only faults detectable by this assertion – the others cannot be made to trigger it. Therefore with both of these assertions, we know that our assertions can detect either signal taking the wrong value.
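The whole mutation argument above can be replayed exhaustively in a few lines of Python (a sketch of the technique, not TransEDA's engine): substitute each stuck-at value into a copy of each assertion, then search the correct design's behavior for a case that makes the mutant fail:

```python
from itertools import product

def next_b(a, b):
    """Correct RTL: while A is low B toggles; otherwise B stays high."""
    if b == 1:
        return 0 if not a else 1
    return 1

# Assertions as implications over (A, B, next-cycle B).
assert1 = lambda a, b, bn: (not (not a and b)) or (not bn)   # !A && B |=> !B
assert2 = lambda a, b, bn: (not (a or not b)) or bn          # A || !B |=> B

def detected_faults(assertion):
    """Mutation check: feed each stuck value for each signal into a
    copy of the assertion and see whether the mutant can fail on any
    behavior of the *correct* design."""
    found = set()
    for sig, stuck in product(('A', 'B'), (0, 1)):
        for a, b in product((0, 1), repeat=2):
            ma = stuck if sig == 'A' else a       # mutated operand values
            mb = stuck if sig == 'B' else b
            if not assertion(ma, mb, next_b(a, b)):
                found.add((sig, stuck))
    return found

print(sorted(detected_faults(assert1)))   # [('A', 0), ('B', 1)]
print(sorted(detected_faults(assert2)))   # [('A', 1), ('B', 0)]
print(len(detected_faults(assert1) | detected_faults(assert2)))   # 4
```

As the article's analysis predicts, the first assertion detects A stuck at 0 and B stuck at 1, the second detects the complementary pair, and together they detect either signal taking the wrong value.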

Formal technology integration

With formal tools available that work from the same assertions and system constraints that are checked by the simulator, there has never been a simpler way of introducing the power of formal checking into an existing verification methodology. But in a methodology that uses both simulation and formal checking, we would like a unified view of coverage on the design, so how do we complement our simulation metrics with formal technology? The answer is coverability checking.

Coverability allows the formal tool to eliminate, by proof, coverage points that cannot be reached by the simulator. These uncoverable points are filtered out of consideration for the coverage metric. Such a coverability check involves automatically formulating a reachability problem, which is then passed to a formal engine to prove whether that coverage point can be reached. It may be that certain sub-expressions are simply not attainable because of redundancy in the expression, errors in the implementation, or constraints on the inputs to the surrounding block. Coverability applies to both code coverage and assertion coverage.
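For a small block, the reachability problem can be decided exactly by exhaustive search. A minimal sketch (the FSM and its dead state are invented for illustration) proves a coverage point unreachable so it can be dropped from the coverage denominator:

```python
from collections import deque

def next_state(s, inp):
    """Hypothetical 2-bit FSM; state 3 has outgoing transitions in the
    code but no way in, i.e. it is dead code."""
    table = {
        (0, 0): 0, (0, 1): 1,
        (1, 0): 2, (1, 1): 0,
        (2, 0): 0, (2, 1): 1,
        (3, 0): 0, (3, 1): 0,
    }
    return table[(s, inp)]

def reachable(start=0):
    """Breadth-first exploration of the reachable state set."""
    seen, work = {start}, deque([start])
    while work:
        s = work.popleft()
        for inp in (0, 1):
            n = next_state(s, inp)
            if n not in seen:
                seen.add(n)
                work.append(n)
    return seen

cover_points = {0, 1, 2, 3}
unreachable = cover_points - reachable()
print(unreachable)   # {3}: provably uncoverable, filtered from the metric
```

A formal engine does the same job symbolically, so it scales far beyond what explicit enumeration can handle.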

Figure 1. Dynamic verification results for step coverage from TransEDA’s Ascertain tool

Figure 2. Summary of dynamic verification results from Ascertain

Assertion step coverability complements assertion step coverage by identifying steps, or terms within a step’s condition, that cannot be achieved in the design. Similarly, assertion-variable coverability complements assertion-variable coverage by identifying assignments to an assertion’s signals, or terms within the condition for making that assignment, that cannot be achieved.

We have already mentioned that assertion fault coverage would be most effective when driven by a formal checker, but we can consider another formal metric – assertion reachability – for immediate assertions in particular. Assertion reachability considers whether it is possible to start an assertion matching. For immediate assertions (in SystemVerilog) this requires proving that the code branch containing the assertion can be executed. The equivalent check for temporal assertions is the trivial case of proving that the clocking event can occur (without any disabling condition being true).

Test suite optimization for assertion coverage

Test suite optimization complements the assertion-based verification flow by identifying the set of tests that gives the most assertion coverage in a given part of the design. This can save time when bugs are fixed in the RTL: the most important tests for the affected part of the design can be run first to minimize simulation time. Test suite optimization provides traceability from the assertion coverage back to the test benches.

Closing the loop with the original specification

In order to manage the wealth of coverage information, TransEDA has introduced a tool that provides complete traceability of coverage information between:

  • Specification & test-plan documents
  • Test benches
  • Assertions (and assertion coverage)
  • RTL implementation (and code coverage)

This tool allows designers to navigate from one file to the relevant part of another by jumping across links that have been put in place by the user or derived from the coverage information. These links take the form of textual tags that either appear in sections of the document or are added as comment blocks where the associated assertion exists in the code.

The first benefit of this system is that each assertion can be viewed as if it were part of the specification, even though it resides physically within the code. Secondly, by using the additional link that assertion variable coverage identifies, there is now a direct relation between the requirements specification and the RTL code implementing that requirement. Additional links can be made from the specification document to a test plan document, which is usually referenced in test benches. The tool collects coverage information, and any other relevant results data, by parsing textual reports, and so is not tied to any vendor’s set of tools.

Combining these automatic and hard-coded links gives complete traceability between the specification, the assertions, the implementation, the test plan and the test benches. Providing such coverage traceability means that coverage deficiencies can rapidly be linked to the specification and the engineering change process is supported by ensuring that the full implications of a late design or specification change or a bug fix can be properly considered.
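The tag-based linking described above can be sketched in a few lines (our illustration; the [REQ-042] tag format and file names are hypothetical, not the tool's actual syntax). Scanning every artifact for the same textual tags yields the cross-links:

```python
import re

# Hypothetical traceability tag: [REQ-<number>] placed in the spec,
# the test plan and as comments next to assertions in the RTL.
TAG = re.compile(r'\[REQ-\d+\]')

def index_tags(files):
    """files: dict of file name -> text.
    Returns tag -> set of file names carrying that tag."""
    links = {}
    for name, text in files.items():
        for tag in TAG.findall(text):
            links.setdefault(tag, set()).add(name)
    return links

files = {
    'spec.txt':  'While A is low, B toggles between high and low. [REQ-042]',
    'design.sv': '// [REQ-042]\nassert property (!A && B |=> !B);',
    'plan.txt':  '[REQ-042] covered by test tb_toggle.',
}
links = index_tags(files)
print(sorted(links['[REQ-042]']))   # every artifact carrying the tag
```

A coverage deficiency against the assertion can then be traced in one hop to the specification sentence and the test-plan entry that share its tag.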


Assertion coverage provides a measure of how well a set of assertions covers the area of functionality that those assertions describe. We have defined here a set of metrics for measuring assertion coverage. These metrics are complemented by formal techniques to identify unreachable behavior; by test suite optimization to associate coverage results with test benches; and by specification coverage to provide traceability between the assertions, the test plan and the specification. Accurately measuring coverage closure in an assertion-based verification flow is the only way to ensure that the verification task is complete and successful, and to converge quickly toward sign-off.


  1. H. Chockler, O. Kupferman and M.Y. Vardi, ‘Coverage metrics for temporal logic model checking’, Proc. TACAS, LNCS 2031, pp. 528-542, 2001.
  2. P.S. Miner, ‘SPIDER Research and DO-254 Experiences’, FAA National Software Conference, 2003.
  3. Accellera, SystemVerilog 3.1a Language Reference Manual, 2004.
