Named after the Unix utility for checking software source code, "lint" has become the generic term for design verification tools that perform a static analysis of software based on a series of rules and guidelines. These rules reflect good coding practice, common errors that tend to lead to buggy code, and other problems that can be caught by static analysis. When a rule is breached, the lint tool flags the potential bug within the code for review or waiver by the design engineer.
In the hardware-design space, linting is typically applied to hardware description languages (HDLs) such as Verilog, SystemVerilog and VHDL prior to simulation. Today, the goal is increasingly to clean up the RTL before entering that lengthy process, but lint tools are also used to check for potential mismatches between simulation and synthesis. Some linting can also be undertaken at the gate level, and the tools have been extended to check other design and verification inputs such as assertions.
Linting is one of the most established static verification technologies, the broader concept having been originally developed at Bell Labs to check C code in the 1970s.
Typical lint targets
Earlier forms of lint relied primarily on heuristic techniques built around syntax analysis. The tools have gradually incorporated formal-verification techniques, as these can identify problems not just with syntax but also inside finite state machines, catching unreachable states and deadlock conditions. As a result, it is common to find suppliers, such as Real Intent, that provide both linting and formal-verification tools.
Some idea of the breadth of problems linting addresses is given by this list of common lint rule topics, originally produced by Mentor Graphics:
- Unsynthesizable constructs
- Unintentional latches
- Unused declarations
- Driven and undriven signals
- Race conditions
- Incorrect usage of blocking and non-blocking assignments
- Incomplete assignments in subroutines
- Case statement style issues
- Set and reset conflicts
- Out-of-range indexing
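To make the mechanics concrete, here is a deliberately simplified sketch of one rule from the list above: flagging blocking assignments inside a clocked always block, a classic source of simulation/synthesis mismatches. Real lint tools parse the HDL into a full design representation; this toy Python version (the function name and message text are invented for illustration) only pattern-matches the source text.

```python
import re

def check_blocking_in_seq(verilog_src):
    """Toy lint rule: flag blocking assignments ('=' rather than '<=')
    inside clocked always blocks. A sketch only; real tools work on a
    parsed design, not on regular expressions over source text."""
    findings = []
    in_seq_block = False
    for lineno, line in enumerate(verilog_src.splitlines(), start=1):
        if re.search(r'always\s*@\s*\(\s*(posedge|negedge)', line):
            in_seq_block = True          # entering a clocked process
        elif re.search(r'\balways\b', line):
            in_seq_block = False         # combinational always block
        if in_seq_block:
            # Match '=' that is not part of '<=', '>=', '==' or '!='.
            if re.search(r'(?<![<>=!])=(?!=)', line):
                findings.append((lineno, line.strip()))
    return findings

src = """\
always @(posedge clk) begin
    q <= d;
    count = count + 1;
end
"""
for lineno, text in check_blocking_in_seq(src):
    print(f"line {lineno}: blocking assignment in sequential block: {text}")
```

Even this crude rule shows why lint needs no test vectors: the problem is visible in the text of the design itself, before any simulation is run.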
Lint tools also incorporate syntactical compliance checks against coding best-practice standards for design reuse, such as STARC and the Reuse Methodology Manual.
Lint can be a highly effective tool when used pre-simulation. It can catch bugs without requiring specific test vectors and so reduce the number of simulation cycles needed to achieve coverage of a logic block. A further strength of lint tools is that the rule decks they have assembled contain decades of experience and knowledge. The sheer number of error checks, however, can make parsing the error reports time consuming and difficult.
It is ultimately the responsibility of the user to review the report generated by the lint tool and then decide which of the potential bugs can be waived and which need to be fixed. Because lint tools contain so many accumulated rules, designers continue to complain that they generate too many false positives. In this scenario, an obvious concern is that much of the simulation time saved may still be eaten up during analysis of the lint tool’s output.
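The review-and-waive workflow described above can be pictured as a simple filter over the tool's report: findings the engineer has reviewed and waived are suppressed, leaving only the actionable items. The record and waiver formats below are hypothetical; real lint tools each use their own report and waiver file conventions.

```python
# Hypothetical lint findings; rule IDs and messages are invented for
# illustration, not taken from any real tool.
findings = [
    {"rule": "W240", "file": "alu.v",  "line": 12,
     "msg": "signal 'tmp' is declared but unused"},
    {"rule": "W415", "file": "alu.v",  "line": 30,
     "msg": "possible unintentional latch on 'state'"},
    {"rule": "W240", "file": "fifo.v", "line": 8,
     "msg": "signal 'dbg' is declared but unused"},
]

# Waivers the design engineer has reviewed and signed off,
# keyed by (rule, file, line).
waivers = {("W240", "alu.v", 12), ("W240", "fifo.v", 8)}

# Only unwaived findings remain actionable.
actionable = [f for f in findings
              if (f["rule"], f["file"], f["line"]) not in waivers]

for f in actionable:
    print(f'{f["file"]}:{f["line"]} [{f["rule"]}] {f["msg"]}')
```

The cost the article describes lies in building and maintaining that waiver set by hand across thousands of flags, which is where the false-positive complaint bites.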
There are then two further issues that designers raise.
First, as lint is based on accumulated knowledge, it is sometimes the case that a significant number of the checks are duplicates while others have become obsolete or unnecessary.
Second, designers say that while lint tools are excellent for checking compliance with coding best practices, they can lack the finesse to accommodate the subtle differences present in in-house coding styles (and sometimes the differences between coding styles in different divisions of the same company). This comparatively long-standing complaint has gained greater force of late, as SoC designs have become increasingly dependent on IP supplied by third parties, which, again, use differing coding styles.
As a result, designers say that they often have had to spend too much time pre-configuring lint tools to exclude or overcome these last two issues. However, companies within the EDA industry are responding to all of these criticisms.
At the most basic level, tool vendors have placed lint rules under close review to deliver the most compact decks that they can. They have also taken advantage of increasing computational power to reach a point where lint tools can analyze designs of, say, 300 million or more gates in a matter of minutes.
User interfaces have then been simplified so that it is much easier for designers to tweak a lint tool according to actual requirements.
Companies such as Real Intent and Atrenta have made their reports easier to use through hierarchical reporting and integration.
Real Intent’s Ascent Lint addresses designers’ fear of being overwhelmed by the number of lint flags raised by prioritizing potential bugs in its reports “so that fixes will produce the greatest improvement in the quality of the HDL”. It also has debug hooks into the Synopsys Verdi platform that cross-probe the RTL to more closely identify where the lint flags are located. These themes form part of a ‘smart reporting’ concept that Real Intent is introducing across all of its products.
Atrenta incorporates lint within its SpyGlass platform, providing a methodology together with the lint rule sets. This, the company says, “provides an infrastructure for rule selection and methodology customization aligned with design milestones”. Atrenta’s approach is based on the idea that different rule sets apply during different phases of design. For example, it makes sense to check for synthesizable constructs before converting RTL to gates, whereas state-machine checks would be prioritized before simulation.
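The idea of aligning rule sets with design milestones can be sketched as a simple lookup from milestone to rule groups. The milestone names and groupings below are illustrative only, not SpyGlass's actual configuration.

```python
# Illustrative mapping of design milestones to lint rule groups.
# Names are hypothetical, not taken from any vendor's tool.
RULESETS = {
    "rtl_entry":      ["naming", "unused_declarations", "port_connectivity"],
    "pre_simulation": ["state_machine_checks", "race_conditions",
                       "blocking_vs_nonblocking"],
    "pre_synthesis":  ["synthesizable_constructs", "latch_inference",
                       "reset_conflicts"],
}

def rules_for(milestone):
    """Return the rule groups to run at a given design milestone."""
    return RULESETS[milestone]

print(rules_for("pre_synthesis"))
```

Running only the rules relevant to the current milestone is one way such a methodology keeps report volume, and hence review effort, down.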
The result of these changes is that lint has become not just an aid to streamline verification and block-level RTL creation but part of the drive towards what is variously called ‘RTL sign-off’ or ‘SoC sign-off’.