Taking control of constraints verification

By Sarath Kirihennedige | Posted: January 13, 2015

Sarath Kirihennedige is director of technical marketing at Real Intent.

Constraints are a vital part of IC design, defining, among other things, the timing with which signals move through a chip’s logic and hence how fast the device should perform. Yet despite their key role, the management and verification of constraints’ quality, completeness, consistency and fidelity to the designer’s intent is an evolving art.

Why constraints management matters

Constraints management matters for a couple of reasons: as a way of ensuring that the intent of the original designers, be they SoC architects or third-party IP providers, is taken into account throughout the design process; and for their ability to enable better designs.

For example, it’s possible to use constraints to define ‘false paths’: routes through the logic that cannot affect its overall timing and so need not be optimised, giving the synthesis and physical implementation tools greater freedom to act.

Functional false paths are rare. But the ability to define a false path is often used to denote asynchronous paths or signals that timing engines don’t have to care about because they only transition once, for example in accessing configuration registers during boot sequences. Without effective constraints management it is easy to lose track of the rationale for particular constraints, and hence the opportunity for greater optimisation.
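In SDC, both uses are expressed with the `set_false_path` command. A hypothetical sketch, with illustrative pin and clock names not taken from the article:

```tcl
# Hypothetical SDC sketch: clock and pin names are illustrative.

# An asynchronous crossing between two unrelated clock domains,
# verified by other means (e.g. synchroniser checks), so the timing
# engine may safely ignore it.
set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]

# A quasi-static configuration register written once during the boot
# sequence: it never transitions in normal operation, so setup/hold
# analysis on paths launched from it adds no value.
set_false_path -from [get_pins cfg_reg*/Q]
```

Without a recorded rationale, a later engineer cannot tell whether either constraint is still valid after the design changes.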

It is also possible to define ‘multi-cycle paths’, through which signals are expected to propagate in more than a single clock cycle. Designers use multi-cycle path constraints in two ways: to denote paths that really are functionally multi-cycle paths; and as a way around corporate methodologies that ban the setting of false-path constraints. In this scenario, designers define a multi-cycle path with a large multiplier as another way to relax timing requirements.
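Both uses map onto the `set_multicycle_path` command. A hypothetical sketch, with illustrative names:

```tcl
# Hypothetical SDC sketch: pin names are illustrative.

# A genuine two-cycle path: data launched here is captured two clock
# edges later, so relax the setup check by one cycle...
set_multicycle_path 2 -setup -from [get_pins mul_stage*/Q] -to [get_pins mul_out*/D]
# ...and move the hold check back so it remains at the launch edge.
set_multicycle_path 1 -hold -from [get_pins mul_stage*/Q] -to [get_pins mul_out*/D]

# The workaround described above: a very large multiplier used where
# false-path constraints are banned, effectively removing the timing
# requirement without declaring the path false.
set_multicycle_path 100 -setup -from [get_pins status_reg*/Q]
```

The paired `-hold` adjustment in the first case is easy to forget, and its absence is exactly the kind of inconsistency a constraints-verification flow should flag.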

Multi-mode designs, for which different constraints may apply to particular paths in different operating modes, present another constraint-management challenge. It is easy to lose track of the rationale for each constraint in each mode, and to overlook potential conflicts between multiple constraints applied to the same path in different modes.
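In practice each mode is usually captured in its own SDC file, often steered by `set_case_analysis`. A hypothetical sketch of two modes constraining the same clock port, with illustrative names:

```tcl
# Hypothetical sketch of per-mode constraints; port names and values
# are illustrative. Each mode typically lives in its own SDC file.

# functional.sdc -- mission mode
set_case_analysis 0 [get_ports test_mode]
create_clock -name clk_func -period 2.0 [get_ports clk]

# test.sdc -- scan-shift mode: the same port carries a much slower
# clock, and a path constraint valid in this mode may conflict with
# the functional-mode constraint on the very same path.
set_case_analysis 1 [get_ports test_mode]
create_clock -name clk_scan -period 20.0 [get_ports clk]
```

Conflicts between such per-mode files are precisely what is easy to overlook by inspection alone.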

Constraints management challenges

Managing and verifying design constraints presents a number of challenges to methodology developers and verification engineers. The first is that of carrying forward a designer’s intent, expressed in the constraints that accompany the logic definition, throughout the design flow from abstract code through synthesis and related transformations (such as test insertion) to gates in silicon.

The second, in this age of increasing chip sizes and shrinking timescales, is ensuring that verification engineers aren’t overwhelmed with such large volumes of debug data that they are unable to analyse it effectively and act upon it quickly as they work to sign off the constraints.

These issues are not well addressed in today’s methodologies: designers often use custom scripts to check the properties of constraints, such as quality and consistency.

Formal approaches can be useful in this context, but because of their speed and capacity limitations, it makes sense to develop a process of stepwise constraints refinement, using a series of targeted analyses and interventions to address the simpler issues. This reduces the burden on formal tools when they are eventually pressed into service.

In this approach, likened by some to peeling an onion, verification engineers might start by checking that the existing constraints have been correctly applied to the design. The next step could be to define all the paths that can be safely ignored, using algorithmic approaches to find such paths and denote them by adding constraints to the design. For example, multi-cycle paths need a retention capability at their start and finish, so an algorithm can check for that. The algorithm needs smarts, though: a multi-cycle path may exploit retention capabilities from elsewhere in the design, such as a state machine that is driving it, so the analysis needs to consider the path’s context as well.

These analyses can be done quickly, before applying formal techniques that risk delivering such detailed reports that engineers get overwhelmed. Effective constraints verification tools need to be able to categorise exceptions based on predefined principles, to provide a prioritised view of what’s important.

Ensuring consistency between SoC and block-level constraints

As the use of IP increases, constraints files provide a useful way to ensure that the same timing budgets are not allocated twice, once at the block level and once at the SoC level.
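One common mechanism is I/O delay budgeting: the SoC integrator and the block owner split each interface path’s clock period between them, and the block-level constraints must not claim time the SoC level has already spent. A hypothetical sketch, assuming a 2.0 ns clock split 60/40 between SoC interconnect and block-internal logic:

```tcl
# Hypothetical sketch: a 2.0 ns clock whose period is split between
# SoC-level interconnect (1.2 ns) and the block's internal logic
# (0.8 ns). Names are illustrative.

# Block-level SDC: the SoC side consumes 1.2 ns before data arrives,
# leaving the block only the remaining 0.8 ns.
create_clock -name clk -period 2.0 [get_ports clk]
set_input_delay 1.2 -clock clk [get_ports data_in*]

# Outputs: the SoC side needs 1.2 ns after the block's boundary.
# If the SoC-level constraints independently assume they own the
# full period on the same path, the budget is double-booked.
set_output_delay 1.2 -clock clk [get_ports data_out*]
```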

Checking for this kind of consistency throws up subtle issues. For example, an IP block may include asynchronous paths that are recognised within a block-level constraint. At the SoC level, though, the IP block’s asynchronous paths may not matter and so can be safely ignored. There’s a twist, though – if other signals within the IP block depend on these paths, then the original constraints on those paths should be taken into account after all.

The key is to be able to assess block-level constraints within the SoC context, which may be easier said than done if the SoC constraints file doesn’t include placeholders for these issues. For example, how do we promote an internally generated clock, derived from a signal on the IP boundary, up to the SoC level?
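Promoting a generated clock usually means re-declaring it at the top with its source re-rooted through the instance hierarchy. A hypothetical sketch, with illustrative instance and pin names:

```tcl
# Hypothetical sketch: promoting a clock generated inside an IP block
# to the SoC level. Instance and pin names are illustrative.

# Block-level SDC: a divide-by-2 clock derived from the block's
# clock input port.
create_generated_clock -name clk_div2 -source [get_ports clk_in] \
    -divide_by 2 [get_pins u_div/clk_div_reg/Q]

# SoC-level SDC: the same generated clock must be re-declared with
# its source re-rooted at the instance boundary; otherwise the paths
# it times inside the block are unconstrained at the top level.
create_generated_clock -name clk_div2 -source [get_pins u_ip/clk_in] \
    -divide_by 2 [get_pins u_ip/u_div/clk_div_reg/Q]
```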

It is also important to remember a second form of consistency that needs checking – between blocks. Depending on the context in which a block is being driven, it may be considered as synchronous or asynchronous. If a tool regards one of the instantiations of the block as correct, it may see other instantiations in different contexts as incorrect – creating a reporting issue.
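The asymmetry can be made concrete with clock relationships. A hypothetical sketch of the same block instantiated twice in different clocking contexts, with illustrative names:

```tcl
# Hypothetical sketch: one block, two instantiations, two contexts.
# Clock names are illustrative.

# Instance u_fifo_a: both sides driven by clk_main -- the paths are
# synchronous and are timed normally, with no exception needed.

# Instance u_fifo_b: the write side is driven by clk_io, unrelated
# to clk_main, so this instantiation's crossings must be declared
# asynchronous -- a constraint that would be wrong for u_fifo_a.
set_clock_groups -asynchronous \
    -group [get_clocks clk_main] \
    -group [get_clocks clk_io]
```

A tool that treats one instantiation’s constraints as the reference will report the other as an error unless it understands the per-instance context.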

Conclusions

Given the importance of constraints in defining how an IC is meant to work, it is increasingly important that their quality, completeness, and consistency are properly verified, and that they are correctly applied throughout the whole design elaboration process.

The best way to verify constraints is to develop a step-by-step approach, tackling particular classes of issue at a time, supported by tools that can sort and prioritise their error reports so that engineers can focus on the most important issues first. If these tools also help preserve the design intent expressed in the constraints all the way through the process, that is a bonus.

Author

Sarath Kirihennedige is director of technical marketing at Real Intent. Previously he was senior staff product engineer at Cadence Design Systems. Sarath has held product and marketing roles at Tera Systems, Mentor Graphics, and Exemplar Logic. He began his career as a hardware design engineer with YDK in Japan.

