How the right DFY flow enhances performance and profit

By Bruce W. McGaughy, ProPlus Design Solutions | Posted: May 30, 2014

Topics/Categories: EDA - DFM, IC Implementation

‘Design for yield’ is a familiar term, but the challenges in today’s increasingly large projects make a refresher on what it offers particularly timely.

We have been talking about design for yield (DFY) for some time. But as we have deployed more nano-scale technologies within giga-scale designs, its importance across the design flow has grown considerably. It is probably fair to say that while most engineers have heard the term used, a recap of what DFY is and how it is evolving is worthwhile. Armed with this perspective, you will be better placed to decide how DFY should influence your methodologies and design strategies for advanced nodes, both in terms of the techniques used and where they are deployed in the flow.

That is the main purpose of this article. It explores the core DFY concept, how it has been realized and how techniques have evolved to meet the greater demands presented by newer process geometries and design goals. The goal is to help you achieve higher performance designs, better profitability through faster time to market, and higher yield.

What is Design for Yield?

DFY covers the way in which process variations are characterized and modeled, either as worst-case corners or as statistical models. The impact of those variations on circuit performance and yield is then analyzed during design in DFY simulations, using techniques such as process, voltage and temperature (PVT) corner analysis or Monte Carlo statistical circuit simulation.
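
To make the idea concrete, here is a minimal Python sketch of a Monte Carlo yield estimate. The delay model, parameter distributions and timing spec are hypothetical stand-ins for what a real flow would obtain from SPICE simulation of an extracted netlist with foundry statistical models.

import numpy as np

rng = np.random.default_rng(0)

def gate_delay(vth, tox):
    # Hypothetical delay model (ps): slower with higher Vth and thicker oxide.
    return 10.0 + 25.0 * (vth - 0.40) + 8.0 * (tox - 1.20)

N = 100_000
vth = rng.normal(0.40, 0.02, N)   # assumed Vth mean/sigma (V)
tox = rng.normal(1.20, 0.05, N)   # assumed oxide-thickness mean/sigma (nm)

delays = gate_delay(vth, tox)
spec = 11.5                       # assumed timing spec (ps)
print(f"Estimated parametric yield: {np.mean(delays <= spec):.4f}")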

DFY’s integration within the flow is a key enabler of trade-offs between yield on one hand and power, performance and area (PPA) on the other. Done well, DFY should not only optimize yield but also improve design robustness and performance at the same time.

To address the continuous increase in process variation at successive nodes under reduced supply voltages, and its growing impact on circuit yield, designers already turn to DFY simulation and analysis tools. They look to these tools to instill confidence in a design before it is sent to fabrication, and to mitigate the cost risk, running to seven figures and more, associated with poor yields at 28nm and below.

Traditional DFY and the move beyond 3 sigma

The core DFY technology has been Monte Carlo analysis. Because of its long runtimes, however, Monte Carlo is today considered useful only for small designs and cases where engineers need to analyze only a small number of samples. For designs that must meet a 3-sigma standard, Monte Carlo remains viable. However, the industry is increasingly looking at so-called ‘high sigma’ requirements, largely in the 4-6 sigma region so far, but with architectural changes such as the move to 14/16nm finFET designs potentially pushing that out to 7 sigma and higher. Verifying yield to these levels using Monte Carlo analysis would require billions or even trillions of simulation samples.
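
A back-of-envelope calculation shows why. Assuming a one-sided Gaussian tail and, as a rule of thumb, roughly 100 observed failures for a stable estimate (that rule of thumb is an assumption, not a hard requirement), the brute-force Monte Carlo sample count grows explosively with the target sigma:

from scipy.stats import norm

for k in (3, 4, 5, 6, 7):
    p_fail = norm.sf(k)        # one-sided tail probability beyond k sigma
    samples = 100 / p_fail     # ~100 observed failures for a stable estimate
    print(f"{k} sigma: p_fail ~ {p_fail:.1e}, ~{samples:.1e} MC samples")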

The consequent move toward high sigma analysis aims to greatly reduce the necessary sampling while maintaining accuracy. A useful blog on this concept can be found at http://tiny.cc/pxjhex and we will review some further examples of its application below. It is worth noting that some types of design already lend themselves well to the high-sigma approach. One example is structures that are used repetitively, where the impact of local variation needs to be verified at the tails of the performance distribution.
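
As one illustration of how sample counts can be cut, the sketch below estimates a 5-sigma tail probability on a toy one-dimensional variable using importance sampling: it shifts the sampling distribution toward the failure region and reweights each sample. This is a generic, published rare-event technique used purely for illustration; commercial high sigma tools use their own algorithms.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
threshold = 5.0                  # "failure" if the toy variable exceeds 5 sigma
N = 200_000

# Sample from a distribution shifted toward the failure region...
x = rng.normal(loc=threshold, scale=1.0, size=N)

# ...then reweight each sample by the ratio of true to shifted densities.
weights = norm.pdf(x) / norm.pdf(x, loc=threshold)
p_fail = np.mean(weights * (x > threshold))

print(f"Importance-sampling estimate: {p_fail:.2e}")
print(f"Analytic tail probability:    {norm.sf(threshold):.2e}")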

DFY models

Let’s explore a few areas where DFY methodologies have been deployed and are evolving at foundries, integrated device manufacturers and fabless design companies. 

SRAM in the yield ramp

Foundry engineers use DFY to assist SRAM yield ramping when they introduce a new process technology that may need to undergo many iterations of improvement. Specifically, SRAM is often used as a vehicle for fine-tuning and ramping parametric yields after all possible defect-related issues have been resolved.

This process has historically been based predominantly on prior experience. It has lacked a systematic approach to fine-tuning for a more holistic optimization. The engineers have needed to run large sets of process splits over multiple iterations. This is costly and time consuming, and while an improved yield may well result, it can hardly be guaranteed to be the optimum. Moreover, as technologies shrink, margins have become smaller for large memories.

In short, traditional SRAM yield ramping is no longer practical for advanced process nodes.

Statistical SRAM analysis

By contrast, statistical SPICE simulation-based DFY tools can now run through different combinations for full process and design optimization within only a few hours, delivering better yield results and a faster ramp. For example, engineers can use DFY tools to analyze SRAM performance measures, such as Iread and read/write margins, and thereby determine the impact of process variations on the functional yield.

Figure 1 shows a high sigma SRAM yield analysis that results in a contour plot as a function of the different parameters process engineers can tune. The user can run a trend analysis to indicate the direction in which yield can be improved, and silicon runs with split lots can then be used to validate the results. In this illustration, the yield contour is a function of Vth_lin (pull-up transistor) and Vth_sat (pull-down transistor), while the yield trend is a function of the Vth variation of the pull-down transistor.


Figure 1. Contour plot for high sigma SRAM yield analysis (Source: ProPlus Design Solutions)
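
For illustration only, the following sketch generates the same general kind of contour from a toy closed-form yield model over two tuning knobs standing in for the pull-up Vth_lin and pull-down Vth_sat. The parameter ranges and the assumed optimum are invented; a real analysis would drive a high sigma engine over SPICE simulations of the bitcell rather than an analytic surface.

import numpy as np
import matplotlib.pyplot as plt

vth_pu = np.linspace(0.35, 0.55, 50)   # assumed pull-up Vth_lin range (V)
vth_pd = np.linspace(0.30, 0.50, 50)   # assumed pull-down Vth_sat range (V)
PU, PD = np.meshgrid(vth_pu, vth_pd)

# Toy yield surface peaking at an assumed optimum of (0.45 V, 0.40 V).
yield_pct = 100.0 * np.exp(-((PU - 0.45) ** 2 + (PD - 0.40) ** 2) / 0.005)

cs = plt.contour(PU, PD, yield_pct, levels=[50, 70, 90, 99])
plt.clabel(cs, fmt="%.0f%%")
plt.xlabel("Pull-up Vth_lin (V)")
plt.ylabel("Pull-down Vth_sat (V)")
plt.title("Toy SRAM yield contour (illustrative)")
plt.show()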

This methodology augments foundry silicon yield validation programs with a faster high sigma sampling- and simulation-based technique. It makes more advanced nodes viable, saves time and wafer cost, and delivers better-optimized yield results than traditional non-simulation-based approaches. Experience shows that such a methodology can save months of process fine-tuning on a 28nm SRAM yield ramp.

Libraries and DFY

Libraries are the basic building blocks required for today’s more complex SoCs. They are another example of where a DFY methodology can be widely deployed.

Library designers must start development early in the lifecycle of a new process. They will often work through multiple versions of the process design kit (PDK) provided by the foundry to ensure that their blocks are available as soon as a new node is released to market.

Library designs can be repeated anywhere from several to millions of times in an SoC, making them ideal candidates for high sigma analysis. Because yield characterization requires multiple simulations and iterations as the PDK is revised, high sigma analysis can greatly reduce time and simulation license costs. It also provides accuracy compared with ad hoc approaches such as extrapolation from a limited number of Monte Carlo runs.
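
The sketch below illustrates the risk with such extrapolation: a Gaussian fitted to 1,000 Monte Carlo samples of a deliberately skewed toy metric under-predicts the true failure rate at its own '5-sigma' point by orders of magnitude. The log-normal metric is purely an assumed example standing in for a skewed cell delay distribution.

from scipy.stats import norm, lognorm

true_dist = lognorm(s=0.3)                        # toy skewed delay-like metric
samples = true_dist.rvs(size=1000, random_state=2)

mu, sigma = samples.mean(), samples.std()
spec = mu + 5 * sigma                             # a "5-sigma" spec under the Gaussian fit

p_gauss = norm.sf(spec, loc=mu, scale=sigma)      # extrapolated failure rate
p_true = true_dist.sf(spec)                       # actual failure rate

print(f"Gaussian extrapolation: {p_gauss:.2e}")
print(f"True tail probability:  {p_true:.2e}")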

Embedded SRAM

Embedded SRAM takes up the largest footprint in many high-performance SoCs and inevitably has a large impact on chip cost and yield. SRAM design is a good example of where DFY can play a significant role.

An SRAM bitcell typically requires a very high yield, depending on the size of the SRAM. For example, a bitcell might need a yield beyond 5.7 sigma to achieve a 50% yield for a 64Mb SRAM. That would imply a need for more than 10 billion samples using Monte Carlo techniques. By contrast, advanced high sigma analysis requires a run of only a few thousand samples to reach 6 sigma and beyond within minutes.
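
The arithmetic behind those numbers can be checked in a few lines, assuming independent bit failures and a two-sided normal tail (a toy calculation, not a sign-off flow):

from scipy.stats import norm

bits = 64 * 1024 * 1024                          # 64Mb array
array_yield = 0.5                                # target 50% array yield
p_bit_fail = 1.0 - array_yield ** (1.0 / bits)   # ~1e-8 per bitcell
sigma = norm.isf(p_bit_fail / 2.0)               # two-sided tail -> ~5.7 sigma
mc_samples = 100 / p_bit_fail                    # ~1e10 samples for ~100 failures

print(f"Per-bit failure probability: {p_bit_fail:.2e}")
print(f"Required bitcell sigma:      {sigma:.2f}")
print(f"Brute-force MC samples:      {mc_samples:.1e}")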

The same methodology applies to SRAM peripheral circuits such as a sense amplifier, which may have a lower instance count than the bitcell but still need 4-5 sigma yield. Once the cell-level yield analysis is done, designers can run block-level yield analysis for an SRAM array, with or without I/O circuits. This sets requirements for high capacity and a large number of variables to be supported by the high sigma technology.


Figure 2. High sigma analysis results for an SRAM array, an I/O block and a filter array

High sigma analysis is ideally suited to repetitive structures with high on-chip replication rates. Examples include standard cells, decoder circuits, hit logic, flip flops and dynamic latches.

DRAM and high-sigma analysis

DRAMs are another good candidate for high sigma analysis. The high repetition of DRAM cells in gigabit chips sets the need for a cell-level yield of more than 7 sigma. Further, analysis of the cell in combination with peripheral circuitry calls for high-dimension, high-capacity, high sigma analysis in which a large number of variables must be considered.

Analog and high-sigma

Analog circuits for medical, automotive, military and aerospace applications also have high sigma design requirements. Historically, analog circuits have not been instantiated in SoCs and, when they have been, they have been analyzed only to 3 sigma. However, tight bit-error-rate (BER) and jitter budgets in standards such as Fibre Channel are now pushing design margins out to high sigma levels.
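
As a rough illustration of the connection, the Q factor implied by a given BER target (assuming Gaussian noise and jitter and a one-sided tail) already lands in high sigma territory at the 1e-12 budgets common in serial links:

from scipy.stats import norm

for ber in (1e-9, 1e-12, 1e-15):
    q = norm.isf(ber)                # sigma level implied by the BER target
    print(f"BER {ber:.0e} -> ~{q:.1f} sigma")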

In cases where jitter needs to be measured, a Monte Carlo approach would require a long and impractical simulation run. DFY analysis tools with a giga-scale SPICE simulator as the engine can help reduce the number of simulations required to obtain a meaningful yield estimate. They maintain simulation accuracy at the SPICE level, and reduce the time for both yield analysis and simulation.

Conclusion

Engineers need to take a careful look across their current methodologies as new process technologies such as finFETs become more prevalent. Many have already concluded that they need to explore more fully the inclusion of advanced DFY techniques to improve design robustness and optimize yield.

The use cases and technologies described here will enable you to follow suit, if you have not done so already.

About the author

Dr. Bruce W. McGaughy is chief technology officer and senior vice president of engineering of ProPlus Design Solutions. Dr. McGaughy received a Bachelor of Science degree in Electrical Engineering from the University of Illinois at Urbana/Champaign and a Master of Science and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California at Berkeley. He has conducted and published research in the fields of circuit simulation, device physics, reliability, electronic design automation, computer architecture and fault tolerant computing.

Prior to his current assignment, he worked for Integrated Device Technology (IDT), Siemens and Intel. In 1997, he joined Berkeley Technology Associates, which eventually became Celestry and was acquired by Cadence in 2003. He has led the development of the hierarchical fast-SPICE simulator UltraSim since its inception at BTA in 1999.

More information

ProPlus Design Solutions
2025 Gateway Place, Suite 130
San Jose
CA 95110
USA

T: 1-877-386-9839
W: www.proplussolutions.com
