U2U links test and yield enhancement

By Luke Collins | Posted: April 5, 2012

One of the key themes at the upcoming Mentor U2U event in Santa Clara on 12 April is the interrelationship between test and yield.

A session on silicon test and yield analysis at the conference will build on work already presented at the Design, Automation and Test in Europe (DATE) conference in Dresden back in March.

At a ‘lunch and learn’ session there, Geir Eide, product marketing manager for silicon learning products at Mentor Graphics, and Thomas Hermann, a product engineer for yield analysis systems at GlobalFoundries Dresden, discussed how the two companies had worked together to make it easier to relate yield problems to manufacturing issues.

Hermann said that as new process nodes are introduced, the time available to ramp the process to an acceptable defect density is being reduced, despite the added challenges of process enhancements such as double patterning and FinFETs.

The classic techniques for debugging processes, such as monitoring the yield on SRAM and logic test vehicles, MBIST and scan, statistical analysis of yield limiters, and physical analysis of defects, need to be enhanced with techniques that can distinguish systematic yield issues from random effects. This is especially true as physical failure analysis is getting more difficult as device dimensions shrink, and deep submicron faults are beginning to behave in an increasingly analog manner, with some paths becoming resistive and others leaky.

One approach to the problem is to develop a ‘manufacturing-aware score’ that shows the actual contribution of tricks, such as adding redundant vias or increasing the amount by which a metal line overlaps a via, to improved yields. This reveals which design for manufacturing (DFM) techniques have the most impact and so should be prioritised.
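
To make the idea concrete, here is a minimal sketch of how such a score might be computed, assuming hypothetical per-site records noting whether a given DFM enhancement was applied and whether the site was implicated in a diagnosed failure. The record fields and the fail-rate-reduction metric are illustrative assumptions, not Mentor's actual scoring method.

```python
# A minimal sketch of a 'manufacturing-aware score' (hypothetical data model).
from collections import defaultdict

def dfm_scores(sites):
    """sites: iterable of dicts like
    {"technique": "redundant_via", "applied": True, "failed": False}"""
    # For each technique, track [fail count, total count] with and
    # without the enhancement applied.
    counts = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for s in sites:
        bucket = counts[s["technique"]][s["applied"]]
        bucket[0] += s["failed"]
        bucket[1] += 1
    scores = {}
    for tech, groups in counts.items():
        rate = {applied: (f / t if t else 0.0)
                for applied, (f, t) in groups.items()}
        # Score: the fail-rate reduction the technique buys.
        scores[tech] = rate[False] - rate[True]
    return scores
```

Sorting the resulting scores, for example with `sorted(scores.items(), key=lambda kv: kv[1], reverse=True)`, would then indicate which techniques deserve the area and routing resource first.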

Eide went on to talk about diagnosis-driven yield analysis.

“Part of the challenge here is finding a defect, and finding the right defect that represents a systematic defect that you can do something about,” he said.

The traditional approach to doing this has been to add scan chains to circuits and use ATPG to create test stimuli that reveal when a scan register is holding an unexpected result, implying some sort of error in the logic leading up to that scan register, such as a long net that delayed a signal, a bridge between nets, or a failed logic cell. This is helpful for understanding where a fault is, but hasn’t been as useful for understanding why the fault happened: in other words, the underlying manufacturing issue.
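
As a rough illustration of the localisation step, the sketch below intersects the fan-in cones of the failing scan cells to produce a candidate fault list. The netlist representation is a hypothetical simplification; this is only the geometric core of what a real ATPG diagnosis tool does.

```python
# A minimal sketch of scan-based fault localisation, assuming a netlist
# represented as a dict mapping each gate or scan cell to its fan-in gates.
def fanin_cone(netlist, node):
    """All gates that can influence `node` (its transitive fan-in)."""
    cone, stack = set(), [node]
    while stack:
        n = stack.pop()
        for driver in netlist.get(n, ()):
            if driver not in cone:
                cone.add(driver)
                stack.append(driver)
    return cone

def candidate_faults(netlist, failing_cells):
    """Gates common to every failing scan cell's input cone are suspects."""
    cones = [fanin_cone(netlist, c) for c in failing_cells]
    return set.intersection(*cones) if cones else set()
```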

What Mentor and GlobalFoundries have been able to do is to correlate the layout, the netlist and the fault classification to try to take that step towards understanding the ‘Why’, as well as the ‘What’, of a fault. Combining the diagnosis of a fault’s location, from the scan and ATPG data, with a map of DFM violations from a DFM analysis, means that the particular DFM rule violation at the fault location becomes another clue with which to diagnose the reason for the fault.
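
A minimal sketch of that correlation step might look like the following, assuming both the diagnosis callouts and the DFM violation map have been reduced to shared (layer, x, y) locations. The keying scheme is an assumption made for illustration, not the tools' actual data model.

```python
# A minimal sketch of joining diagnosed fault locations with a DFM
# violation map (hypothetical location keys).
from collections import Counter

def correlate(diagnoses, dfm_violations):
    """diagnoses: iterable of (layer, x, y) fault locations.
    dfm_violations: dict mapping (layer, x, y) -> list of rule names.
    Returns DFM rules ranked by how often they coincide with a fault."""
    hits = Counter()
    for loc in diagnoses:
        for rule in dfm_violations.get(loc, ()):
            hits[rule] += 1
    return hits.most_common()
```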

“It gives us an additional property, another bit of data to analyze,” said Eide.

Once logic faults can be ascribed to manufacturing defects, such as vias that don’t connect between particular metal layers properly or shorts between signals in a particular cell type, it is then possible to take the ‘root cause’ analysis up to the die level, to see if there are systematic issues across the design that cause a particular kind of fault, or even up to the wafer level, to reveal macroscopic process issues.

Since genuinely random defects should be spread uniformly across a wafer, dividing the wafer’s surface up into a variety of different patterns (strips, checkerboards, bullseyes etc) can reveal nonrandom fault distributions. In one analysis, a fault involving a bridge between metal 2 lines was found to be concentrated around the edge of the wafer, suggesting that it had something to do with the way the wafer had been polished at its edges.
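
Here is a minimal sketch of the bullseye variant of that analysis, assuming each die is known by its position relative to the wafer centre and a pass/fail flag. The ring width and the use of scipy's chi-square test are illustrative choices, not the flow described in the talk.

```python
# A minimal sketch of zonal (bullseye) yield analysis.
import math
from scipy.stats import chisquare

def bullseye_zone(x, y, ring_width=30.0):
    """Bin a die by its radius from the wafer centre, in mm."""
    return int(math.hypot(x, y) // ring_width)  # 0 = centre, higher = edge

def zonal_fail_test(dies):
    """dies: iterable of (x, y, failed). Returns (chi2, p-value)."""
    totals, fails = {}, {}
    for x, y, failed in dies:
        z = bullseye_zone(x, y)
        totals[z] = totals.get(z, 0) + 1
        fails[z] = fails.get(z, 0) + failed
    zones = sorted(totals)
    overall = sum(fails.values()) / sum(totals.values())
    observed = [fails.get(z, 0) for z in zones]
    # Expected fails per zone if defects were uniformly distributed.
    expected = [overall * totals[z] for z in zones]
    return chisquare(observed, f_exp=expected)
```

A low p-value flags a nonuniform distribution, such as the edge-concentrated metal 2 bridging defect above, as worth a closer look.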

There’s more on this in white papers from Mentor Graphics here and here.

Chris Edwards adds:

In an afternoon session at Mentor’s U2U event, Shobhit Malik of GlobalFoundries will describe how yield learning using the input from test tools has been used by the foundry not just to improve the DFM rule deck but to help its fabless customers get better yield.

At DATE, Joe Sawicki, general manager of the design-to-silicon division at Mentor Graphics, described how test is changing to improve yield across a range of processes.

Sawicki said: “We have one result on a mature node. It was a product that had been running for three years on 90nm. We found a problem that was correctable on the manufacturing line that increased yield by 2%.”

Changes in fault models are helping to reduce false positives during test. Sawicki recalled an AMD design in which the defects per million were pulled down through the application of new fault models that are “cell aware”.

“Today, test tends to assume you can localise faults at the pin level. What if there are faults that don’t show up at a pin? You have to drive two gates to zero to see if a gate is functional or not. So we are adding new fault models to increase test visibility, as well as developing analog fault models to inject faults into cells to look at what the fault model needs to be,” Sawicki said. “This technology comes with a relatively low incremental increase in the number of vectors compared with traditional stuck-at fault models.”
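
A toy example makes the blind spot concrete. The defective behaviour below is invented for illustration (real cell-aware models are derived from analog fault simulation of the cell's transistors, as Sawicki describes), but it shows how a defect inside a two-input AND cell can pass every pin-level stuck-at pattern and still fail the one pattern that drives both inputs to zero.

```python
# A worked toy example of a fault invisible to pin-level stuck-at tests.

def good_and(a, b):
    return a & b

def defective_and(a, b):
    # Hypothetical internal defect: the output floats high when both
    # pull-down paths are off, i.e. only at a = b = 0.
    return 1 if (a, b) == (0, 0) else a & b

# These three patterns detect every stuck-at fault on the cell's pins...
stuck_at_patterns = [(0, 1), (1, 0), (1, 1)]
assert all(good_and(a, b) == defective_and(a, b)
           for a, b in stuck_at_patterns)

# ...yet the internal defect escapes them all; only the cell-aware
# pattern (0, 0), driving both inputs to zero, exposes it.
assert good_and(0, 0) != defective_and(0, 0)
```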
