X propagation

Hardware description languages such as SystemVerilog use the symbol ‘X’ to represent an unknown logic value. If a simulator cannot decide whether a signal should be a ‘1’, a ‘0’, or a ‘Z’ (high impedance), it assigns an X. This causes problems for two reasons. The first is that an X may be converted inadvertently to a ‘known’ state by overly optimistic simulation semantics. The second is that gate-level simulators can generate excess X values because they generally apply more pessimistic rules.

The situation is not helped by the traditional use of X both to express ‘don’t-care’ conditions for synthesis and to represent an unknown state in simulation.

X states can propagate through a simulation, multiplying uncertainty and potentially hiding bugs.

This has become a bigger issue in recent years because of the use of power-gating architectures to save energy. In this design approach, powered-down logic no longer provides reliable signals to downstream logic, which may not have been designed to cope with those indeterminate inputs. Errors can then propagate through the chip as blocks move between power states. In addition, any logic block that has not been properly reset may generate X values.

Over-optimistic simulation

A simulator will assign an X to any memory element that has not been initialized. Reset logic is expensive, particularly in terms of routing overhead, so it is rarely feasible to apply a reset to every memory element at restart. Logic designed with uninitialized memories and a weak reset strategy is therefore prone to more undiscovered bugs due to X propagation.
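As a minimal sketch (all signal names invented for the example), a flop that is missed by the reset strategy starts simulation as X and can contaminate anything computed from it:

```systemverilog
module counter (
  input  logic       clk,
  input  logic       rst_n,
  output logic [7:0] count
);
  // 'seen_max' has no reset term: until it is first written,
  // a 4-state simulator reads it as 1'bx
  logic seen_max;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      count <= 8'h00;           // reset covers count...
    else
      count <= count + 8'h01;
  end

  always_ff @(posedge clk)
    seen_max <= seen_max | (count == 8'hFF);  // x | 0 = x, so the X persists
endmodule
```

Until `count` actually reaches 8'hFF, the OR evaluates to `x | 0 = x`, so the X lingers in `seen_max` indefinitely.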

A common source of unwanted X optimism is when downstream logic states are assigned using ‘if-then-else’ or ‘case’ statements. Because an X will not satisfy the logic test, simulation takes the ‘else’ branch or the default case-item. This can silently convert the X to a ‘known’ value, masking a bug, or propagate it further into the simulation.
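A minimal sketch of this optimism (signal names invented): with `sel` at X, the ‘if’ test fails and simulation confidently takes the else branch, so the X disappears into a ‘known’ output:

```systemverilog
module opt_example (
  input  logic sel,   // suppose sel == 1'bx at this point
  input  logic a, b,
  output logic y
);
  always_comb begin
    if (sel)
      y = a;
    else
      y = b;   // taken when sel is X: y gets a 'known' value
  end
endmodule
```

By contrast, writing `y = sel ? a : b;` would resolve to X whenever `a` and `b` differ, which is one reason the coding guidelines quoted below favor the conditional operator over if-statements.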

X pessimism can occur when signals converge, for example in a multiplexer. The simulator must assign an X to the output if it is presented with an X on an input that is not overridden by the other known signals feeding into the block.
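A classic illustration, sketched here with invented names: if both data inputs of a gate-level multiplexer carry the same value, the real hardware output is known regardless of the select, but the gate network still evaluates to X:

```systemverilog
module mux_gates (
  input  logic sel, a, b,
  output logic y
);
  // With sel = 1'bx and a = b = 1'b1:
  //   (x & 1) | (~x & 1)  =  x | x  =  x, although hardware would drive 1
  assign y = (sel & a) | (~sel & b);
endmodule
```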

Simulation tweaks

There are a number of ways of dealing with X propagation. One is to analyze the waveform generated by a simulator – many simulators color-code these signals to make them easier to pick out. However, this involves painstaking manual inspection and design insight to work out whether the X is dealt with correctly or not.

Some simulators can be set up to generate random values in place of Xs, on the basis that differences in behavior between otherwise identical input vectors should point to X-propagation issues. However, the errors caused by X propagation can be subtle and only turn up in rare cases, which may not be encountered during most simulation runs. Another possibility is more exhaustive simulation.

The ‘xprop’ simulation technology employed by recent versions of Synopsys’ VCS will replace every X it encounters with both a 0 and 1 to calculate all possible values, and then merge them to decide which value should be driven to the output. VCS employs a number of merge techniques to reflect different expectations, including a pessimistic approach more akin to gate-level simulation, and a more hardware-like scenario in which any output that cannot be merged to a known value is converted to an X.
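The merge idea can be sketched as follows (a conceptual illustration, not actual tool code):

```systemverilog
// Evaluate the expression with the X replaced by 0 and then by 1;
// result bits that agree keep their value, bits that differ become X.
//
//   y = sel ? a : b;   with sel = 1'bx, a = 1, b = 1
//     sel = 0  ->  y = 1
//     sel = 1  ->  y = 1
//     merge    ->  y = 1    (the X does not propagate)
//
//   the same expression with a = 1, b = 0
//     sel = 0  ->  y = 0
//     sel = 1  ->  y = 1
//     merge    ->  y = x    (the X genuinely matters)
```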

The CVC simulator from Tachyon-DA takes a different approach, changing the default Verilog semantics so that any X is evaluated as potentially being a 0, a 1 or an X.

X prevention

On the basis that prevention is better than cure, Mike Turpin’s seminal 2003 SNUG Boston paper “The dangers of living with an X” contained a number of recommendations for writing HDL that is more likely to avoid X propagation, as well as advice on verification techniques:

“For one-hot logic on a critical path, write the RTL directly in a sum-of-products form (rather than case) and add a one-hot assertion checker.

“Avoid using if-statements, as they optimistically interpret Xs. Instead use ternary (that is, conditional ?) operators or priority-encoded case statements.

“For case statements, cover all reachable 2-state values with case-items and always add a default (but only use it to assign Xs, to avoid X-optimism).”
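A sketch of the recommended case style (signal names invented): every reachable 2-state value of the select has its own case-item, the default exists only to make a corrupt select visible, and a critical one-hot signal gets its own assertion checker:

```systemverilog
always_comb begin
  case (state)
    2'b00:   y = a;
    2'b01:   y = b;
    2'b10:   y = c;
    2'b11:   y = d;
    default: y = 'x;  // only reachable on an X/Z select: propagate, don't hide
  endcase
end

// One-hot checker for a critical one-hot encoded signal
assert_one_hot: assert property (@(posedge clk) $onehot(grant));
```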

However, for complex control logic, it can be difficult to be sure that these coding techniques lead to an X-accurate, rather than X-optimistic or X-pessimistic, design.

X detection

Some of these issues can be detected by linting tools such as Ascent Lint. Cadence Design Systems is working on a combination of formal techniques that will yield what the company currently calls “super linting”, and which will form part of a range of verification ‘apps’.

Formal verification can be used to exhaustively prove that Xs in the design are unreachable, or that they will not affect the correct operation of the logic. Techniques vary, but in the main they fall into automatic property checking to prove that an X is unreachable, and interactive property checking to prove that the X is not stored in a register. Another possibility is to use automatic property checking to investigate code coverage reports.
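As a hedged sketch of the kind of property a formal tool can attempt to prove (signal names invented): the design never drives X on a control output once reset has completed:

```systemverilog
property no_x_after_reset;
  @(posedge clk) disable iff (!rst_n)
    !$isunknown(valid_out);   // $isunknown is true if any bit is X or Z
endproperty
assert_no_x: assert property (no_x_after_reset);
```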

Vendors have added specific support for X propagation checks to their formal-verification suites. For example, Mentor Graphics’ Questa AutoCheck is designed to automatically identify common errors on a push-button basis. For issues such as X propagation, deadlock, combo-loop and overflow, the tool generates the assertions that perform the necessary coverage checks without input from the user.

As part of its range of apps based on its core formal-verification technology, Jasper Design Automation has introduced one for X-propagation checks. The app treats X-prone signals as both a 0 and a 1 to mimic hardware behavior. The tool provides a set of flows that check clocks and resets, examine control structures to determine which areas of the code Xs can propagate to, and establish whether Xs can propagate out of the design.

Real Intent has proposed a methodology that combines design-oriented checks, supported by linting tools, with formal tools such as Ascent XV focused on the verification stages. The methodology is described by Lisa Piper in this article at Tech Design Forum.
