RTL code coverage is used to measure the progress of SoC functional verification in both simulation and formal techniques such as formal property verification (FPV), but have you ever wondered how code coverage differs between the two? There are clear similarities, but also large differences. The results are interpreted differently, and by understanding and taking advantage of this difference, you can speed up coverage closure in simulation.
Commonly used code-coverage metrics include line, condition, toggle and finite state machine (FSM) coverage.
Line coverage measures which lines of the RTL code have been executed.
Condition coverage monitors whether certain expressions in the code evaluate to true or false. For example, see the Verilog statement below:
assign my_wire = test ? in1 : in2;
This statement is always executed, so the line is always covered, but condition coverage also requires that the ‘test’ variable take both a true and a false value, making it a more complete metric than line coverage.
Toggle coverage monitors value changes on registers and nets, and measures whether the value has changed from 1 to 0 or from 0 to 1.
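A “sticky” status flag is a simple sketch of why both toggle directions matter (the signal names below are hypothetical, not from the design discussed later):

```verilog
// Hypothetical sticky error flag: err_seen can toggle 0->1 when err_in
// pulses, but only a reset returns it to 0. If reset cycles are excluded
// from coverage collection, the 1->0 toggle goal is never hit.
always @(posedge clk or posedge rst)
  if (rst)
    err_seen <= 1'b0;
  else if (err_in)
    err_seen <= 1'b1;
```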
FSM coverage measures how many FSM states have been visited and how many transitions occur during verification.
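As a sketch (state and signal names are illustrative assumptions), consider a three-state FSM. FSM coverage counts which of IDLE/RUN/DONE are visited and which transitions, such as IDLE to RUN, actually occur; a transition the logic never encodes, such as DONE to RUN, can never be covered.

```verilog
// Hypothetical 3-state FSM. State-coverage goals: IDLE, RUN, DONE visited.
// Transition-coverage goals: IDLE->RUN, RUN->DONE, DONE->IDLE.
localparam IDLE = 2'd0, RUN = 2'd1, DONE = 2'd2;
reg [1:0] state;

always @(posedge clk or posedge rst)
  if (rst)
    state <= IDLE;
  else
    case (state)
      IDLE:    if (start)  state <= RUN;
      RUN:     if (finish) state <= DONE;
      DONE:                state <= IDLE;
      default:             state <= IDLE;
    endcase
```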
Each line, condition, register and net which is used for tracking one or more of the coverage metrics is referred to as a coverage goal, and the coverage metric reports the percentage of goals covered.
The differences between coverage metrics used in simulation and formal verification are:
- Simulation measures whether a coverage goal is reached or covered
- Formal measures whether a coverage goal is reachable or coverable
This may sound like a small semantic difference, but it has large implications for the way in which coverage data is interpreted and used.
In simulation, the testbench determines the stimulus driven to the device under test (DUT), and different values are applied every cycle. The longer the simulation runs, the more distinct stimuli are applied. Line coverage measures which lines in the design are executed, or covered, as the simulation runs. The goal is to reach 100% coverage for each coverage type, to show that the verification task is complete. Coverage is a negative metric: 100% coverage doesn’t guarantee that the code is correct, but less than 100% coverage means there are parts of the design that were never exercised, so you can be certain that verification is incomplete.
Line coverage in formal analysis determines whether it is possible to reach a line of code, i.e. whether it is possible to create a simulation trace that executes that line. If it is impossible to reach a specific line in the RTL code, or if a net or register cannot toggle, that coverage goal is marked as un-coverable: no simulation trace exists that can execute that part of the code.
For example, in the Verilog code sample below, line 9 is un-reachable because “srx_pad_i” must be true on line 6, so the if condition on line 8 always evaluates to false. This very likely indicates a bug in the RTL code.
1:  always @(posedge clk or posedge wb_rst_i)
2:  begin
3:    if (wb_rst_i)
4:      counter_b <= 8'd159;
5:    else
6:      if (srx_pad_i)
7:        counter_b <= brc_value;
8:    if (enable & counter_b != 8'b0 & !srx_pad_i)
9:      counter_b <= #1 counter_b - 1;
10: end

Note that code-coverage metrics in FPV don’t measure the completeness of the verification task; other metrics are needed to check for FPV completeness.
Here lies a key difference: coverage goals in simulation measure the efficiency of the test environment, but coverage goals in formal analysis measure properties of the RTL.
By understanding this difference, you can accelerate coverage closure in simulation by using formal analysis to identify which coverage goals are un-reachable. A simulation of the code above can never reach 100% code coverage, so you’re wasting time trying to get there by running longer simulations or writing directed tests. Instead, the un-reachable code can be modified, or excluded from the coverage metric, so that 100% code coverage becomes possible in the simulation environment.
In addition, if a coverage goal is not covered in simulation but is proven reachable by formal analysis, the formal tool can provide a trace showing how to reach it. Such a trace improves coverage closure by guiding the development of an effective test case.