Revealing the hidden cost of performance for physical verification

By John Ferguson | Posted: March 1, 2007
Topics/Categories: EDA - Verification

The increasingly onerous nature of physical verification at today’s nanometer process geometries requires the regular benchmarking of appropriate tools, if designs are to be realized in a cost-effective manner. However, the criteria for such benchmarking are all too often limited to relatively simplistic notions of ‘performance’.

The article explains that the real cost of physical verification tools can only be fully analyzed by also considering a number of other factors that have a significant impact on the final price tag. Foremost among these are four areas beyond performance:

  1. How the tool licenses are structured.
  2. The implications for hardware that accompany the choice of tool.
  3. The breadth and nature of tool support and the number of sources from which it is available.
  4. The costs involved in training, adoption and migration for the EDA software.

The foundations for addressing these criteria are considered in detail to help project managers make a more fundamental assessment of the most appropriate choice for a design project.

Turn-around-time (TAT) and shrinking project schedules are the biggest headaches that repeatedly dog physical verification engineers. The volume of data needed to accomplish their step in the design flow has shifted sharply up and to the right, especially at nanometer process nodes. The data for a design layout can easily run into the gigabyte range; sometimes it reaches tens of gigabytes (Figure 1).

Figure 1. Design data has ramped sharply for nanometer design

Not surprisingly, the need for a rapid TAT is generating a great deal of interest in the run-time and performance characteristics of individual EDA tools. This is especially true of physical verification, and regular benchmarking is therefore needed to evaluate the performance claims of various tools. However, any benchmarking limited solely to performance criteria can give misleading conclusions. It is vitally important to evaluate the ‘total’ cost of a physical verification tool.

The physical verification flow consists of design-rule checking (DRC), layout versus schematic (LVS), and, more recently, yield compliance analysis and correction. This is self-evidently a strenuous task. The amount of data to be processed is, as noted, massive. At the same time, the number of rules in a physical verification flow increases dramatically at each subsequent node. This all adds to the already over-taxed physical verification step (Figure 2). In the past year or so, EDA vendors have sought to improve performance by focusing on the use of parallel processing for the physical verification task. This approach provides significant improvements in run-time, but also adds significant cost. To be more specific, any determination of the cost of a parallel or distributed solution should account for each of the following components; all contribute to costs beyond the basic price tag.

  1. Tool license;
  2. Hardware;
  3. Tool support;
  4. Training, adoption and migration.

We now need to evaluate the impact of each member of this quartet in greater detail.

Figure 2. The complex challenge of physical verification

Tool license cost

The license cost associated with physical verification is very often measured solely on the basis of running full-chip verification in what the user considers a ‘reasonable time’. As a result, many EDA vendors promote license packages that enable access to many processors. There are two typical approaches to licensing and packaging parallel processing solutions:

  1. The sale of a main license required to invoke a job, plus an additional license that allows for the use of multiple processors (the number of processors that can be used varies, but often increases with price).
  2. The re-use of existing licenses (in many cases, each additional license enables a reduction in run-time by adding access to one more processor, but other scenarios include enabling access to larger numbers of processors with each license).

Too often, the cost of completing a full-chip run can be a misleading measure when comparing different license models. A far more accurate accounting of license cost looks at the maximum number of licenses required throughout a design flow.

While full-chip verification runs are generally longer, other points in the design flow often require more licenses (e.g., the licenses associated with the verification of standard cells, IP blocks, or macro blocks).

Very often, IP cells and blocks are developed and verified in parallel. With many traditional verification tools, small standard cells or analog blocks can be run to completion in minutes on a single CPU. In this situation, by leveraging multiple CPUs in a distributed environment, or even on multiple desktops, many IP blocks can be verified in parallel. The number of licenses required can therefore be estimated from the total number of jobs being run. Newer verification solutions that focus on performance improvement through parallel processing are often limited in this regard: their single-CPU run-times are frequently unacceptable, so multiple CPUs are required to achieve the expected run-times, typically of the order of minutes, even for smaller components. In such a situation, the number of jobs that can be run in parallel is diminished because fewer CPUs are available for running additional jobs.
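As an illustration, the license arithmetic above can be sketched with made-up numbers. Both license models and all counts here are hypothetical, not drawn from any vendor's actual pricing:

```python
# Illustrative peak-license comparison (all numbers hypothetical).
# Model A: traditional tool, one license per single-CPU job.
# Model B: parallel-focused tool, one main license per job plus one
#          add-on license for each extra CPU the job uses.

def peak_licenses_model_a(concurrent_jobs):
    """Each job runs on one CPU and consumes one license."""
    return concurrent_jobs

def peak_licenses_model_b(concurrent_jobs, cpus_per_job):
    """Each job needs one main license plus (cpus_per_job - 1) add-ons."""
    return concurrent_jobs * (1 + (cpus_per_job - 1))

# 40 IP blocks verified concurrently during cell/block development:
print(peak_licenses_model_a(40))      # 40 licenses at the peak
print(peak_licenses_model_b(40, 4))   # 160 licenses at the peak
```

The point of the sketch is that the peak demand occurs during block-level work, not the full-chip run, so that is the phase against which license budgets should be sized.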

Hardware cost

As is the case with licensing, any true understanding of the total cost of physical verification from a hardware point of view must address the maximum number of processors needed over the course of a design flow. Once the number of processors required for each tool is understood, then the total cost for each tool can be calculated.
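A toy sketch of this sizing exercise follows; the flow phases and CPU counts are invented for illustration only:

```python
# Hypothetical phase-by-phase CPU demand across one design flow.
# The farm must be sized for the peak phase, not just the full-chip run.
phase_cpu_demand = {
    "standard-cell verification": 48,  # many small jobs in parallel
    "block-level verification":   32,
    "full-chip DRC/LVS":          24,  # one large distributed run
}

processors_needed = max(phase_cpu_demand.values())
print(processors_needed)  # 48
```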

It is very important to consider not just the price per processor, but also additional one-off and recurring costs. Below is a list of costs to consider with respect to hardware.

  1. Total cost to purchase appropriate hardware. This can vary dramatically depending on specific requirements, such as:
    1. Number of compute nodes;
    2. CPUs or cores per node;
    3. RAM required per node;
    4. Disk storage required;
    5. Network configuration required;
    6. Rack housing architecture to hold the nodes.
  2. The recurring cost of supporting the hardware, which includes elements such as:
    1. Hardware vendor support cost;
    2. Internal IT support cost.
  3. Licensing and support costs associated with grid queuing and allocation software.
  4. Recurring electrical costs to power the processor farm configuration.
  5. Fixed and recurring costs associated with cooling the processor farm.
  6. Real-estate costs to house a processor farm.

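These components can be rolled up into a rough total, as in the following sketch. All figures, and the split between fixed and recurring costs, are hypothetical:

```python
# Rough hardware total-cost-of-ownership sketch (all figures hypothetical).

def hardware_tco(nodes, price_per_node, support_per_node_yr,
                 power_cooling_per_node_yr, facility_per_yr, years):
    """Fixed purchase cost plus recurring support, power/cooling,
    and floor-space costs over the life of the processor farm."""
    fixed = nodes * price_per_node
    recurring = years * (nodes * (support_per_node_yr
                                  + power_cooling_per_node_yr)
                         + facility_per_yr)
    return fixed + recurring

# A hypothetical 16-node farm kept in service for three years:
print(hardware_tco(16, 8000, 800, 600, 5000, 3))  # 210200
```

Even with these invented numbers, the recurring items amount to a substantial fraction of the purchase price, which is why they belong in any tool-to-tool comparison.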
The fixed cost of the hardware can vary greatly depending upon the specific requirements of the verification software in question. In the past few years, the cost of processors has fallen significantly, and this trend appears to be continuing.

Costs for a processor farm vary with the architecture of the CPUs in question and the number of processors. By and large, however, the dominant cost for hardware systems today is dictated by the RAM required for each node. The total amount of RAM a node can house depends on the number of available DIMM slots in the system. Because most housing units are limited in this sense, increased memory requirements force a switch to higher-density memory boards that significantly add to the cost. When extensive memory is required, it may be more cost-effective to consider non-standard housing units that offer more memory slots, and to fill those with less expensive memories. In some situations, the RAM requirements for physical verification can be offset through the use of disk storage. While less expensive than RAM, disk space is not free either; these costs should be considered as part of the total cost.

Effective use of a compute farm should not be limited to one function, like physical verification. To make the most effective use of processors, it is important to consider all the design and corporate software functions that the farm can support. Users should work with qualified hardware vendors who have the expertise to ensure that they take best advantage of all their resources.

Tool support cost

For most EDA vendors, the cost of support is a fixed percentage of the list price for the licenses purchased. As a result, these recurring costs are easy to calculate and compare. Less obvious, however, is the return on the investment (ROI) that each vendor provides for the support dollars spent.

There are many factors to consider when calculating the ROI on support dollars. One is the level of access to support when needed. The most common form of tool support is still phone support. For each vendor, the ability to reach a live person when needed should be considered. Beyond that, one should also consider what other forms of support are available. If users can quickly find answers to their questions without requiring a lengthy discussion with a person, problems can be resolved more quickly.

Another requirement for effective support is a sufficient level of knowledge in the support team. Access to individuals that are not only tool experts, but also know the verification and tape-out flow will provide more valuable insights. Obviously, this level of expertise is more often available from vendors who have a long history in their market.

Support should also cover the efficient handling of problems or bugs associated with the software as well as access to useful enhancements in functionality. How well a tool provider performs in this situation can often make a dramatic difference to the user’s ability to successfully tapeout on schedule.

In addition to these factors, there is one other implicit factor associated with support. That is the availability of support from sources other than the EDA vendor. This typically comes in the form of access to experience and expertise from third-party partners and service providers such as foundries, intellectual property suppliers, and consultants. They can all provide additional channels that allow users to more quickly understand requirements and resolve issues. If tool performance, license cost, and vendor support considerations are equal, standardizing on the same tool used by these third-party partner/vendors can make a significant difference in the ability to tapeout on schedule.

Training, adoption and migration costs

When comparing alternative verification solutions, one also needs to look at the costs associated with training, adoption and migration. Again, there are several factors that contribute to the total cost of adoption. Three of the most significant are:

  1. Cost of training for the tool:
    1. Setup training;
    2. Rule writing;
    3. Use and debugging.
  2. Cost of tool qualification.
  3. Cost of integration into an existing design flow.

The biggest impact on training cost comes from the amount of new information required, which is largely linked to how much change a new tool introduces compared with the user's existing solution. The least training is naturally associated with the tool used historically, or the tool most commonly known in the industry. By adopting software used extensively throughout the industry, future training costs may also be reduced, because new design personnel can be recruited from a more extensive pool of knowledgeable users.

In addition to training, the costs of qualifying, implementing and adopting a solution can be considerable. Implementation is most heavily impacted by the development of the DRC and LVS rule files. Rule files can come from many sources (e.g., foundry-provided rules, rules translated from historic tools, or rules generated from scratch from a process manual). Rules acquired from a foundry, or generated through automated translation tools, obviously require less manual intervention, but these scenarios typically do not eliminate the need for rule qualification. For the highest level of foundry support, one should consider which tool is used by the foundry itself and which tool is used by the majority of that foundry's users.

Conclusion

In summary, the total cost of adopting a physical verification solution includes many factors. In general, the least expensive path is the path of least resistance: the solution that requires the least total change, in the form of license configurations, support infrastructure, hardware resources, and qualification needed, is usually the least expensive option. When comparing alternative solutions, these costs should be measured carefully against any gains.

Mentor Graphics
Corporate Office
8005 SW Boeckman Rd
Wilsonville
OR 97070
USA
T: +1 800 547 3000
W: www.mentor.com
