The infrastructure for design for test (DFT) could look quite different in five years' time compared to the situation designers have today, as chipmakers wrestle with the problems of yield control, safety in automotive and related markets, and device-aging effects across the board.
In a panel at the Design Automation Conference (DAC) on the future of DFT, AMD senior fellow Jeff Rearick said structured DFT is unlikely to go away but the pressures on SoC design will lead to other forms of test strategy being adopted. “Structured DFT is well understood but it has undesirable physics,” he said, referring to the timing and congestion problems that inserting scan chains at the gate level can create in a design.
“What’s the alternative? I think it’s something we thought of 30 years ago: ad hoc DFT. Design for test could be much more broad than what we do today. We can use infrastructure already on the device to test. Designers may say: ‘There are things I can do for that’. But the details are left as an exercise for the reader,” Rearick quipped.
“I am not going to tell AMD engineers to take off scan chains. But if I had told you ten years ago that you were going to have the better part of half a million scan flops, and that we would be well on the way to tens of millions of scan flops on there, you would never have believed it. Well? We’re there. Can we come up with a better way to deal with that?”
IBM Austin Laboratory researcher Anne Gatticker argued: “The future of DFT is going to be driven by the future of computing, which is where we move into the cognitive era.”
Efforts by server-farm operators such as Amazon, Facebook, Google, and IBM have greatly improved the utilization of computer resources. A decade ago, a server might run at full capacity for only 20 per cent of the time because peak loads only arrived intermittently. Today, operators have embraced container and orchestration technologies that allow them to start, stop, and move workloads quickly depending on demand and available capacity.
“We get statistics on how often server chips are active. And today they are very tired. They are doing more work, more often. The slop that we had in the past is increasingly going away. With that, some of the forgiveness is going away and we will start to encounter more subtle defects,” Gatticker explained.
In servers, automotive and other systems where sudden hardware failure is not acceptable, SoCs will need to use more on-chip instrumentation that checks the logic for defects more regularly, Gatticker suggested.
“Because of ISO 26262 we have to figure this out,” Rearick said. “We have to figure out how to test in the field at power-on. Logic BIST has problems today but smart people will figure out other ways to do it.”
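Logic BIST of the kind Rearick refers to typically pairs an on-chip linear-feedback shift register (LFSR), which generates pseudo-random test patterns, with a multiple-input signature register (MISR) that compacts the circuit's responses; the power-on test passes if the final signature matches a golden value computed at design time. As a rough illustration of that flow (not AMD's or IBM's implementation), here is a minimal Python sketch in which a toy 4-bit combinational function stands in for the logic under test, and the seed and feedback taps are illustrative choices:

```python
def lfsr_stream(seed, taps, n):
    """Galois-style LFSR: yield n pseudo-random test patterns."""
    state = seed
    for _ in range(n):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps

def misr_update(sig, response, taps, width):
    """Fold one response word into the running signature."""
    lsb = sig & 1
    sig >>= 1
    if lsb:
        sig ^= taps
    return (sig ^ response) & ((1 << width) - 1)

def toy_logic(x):
    """Stand-in for the combinational logic under test (4-bit)."""
    return ((x << 1) ^ (x >> 2) ^ 0b0110) & 0xF

def run_bist(logic, seed=0b1001, taps=0b1100, width=4, n=32):
    """Apply n LFSR patterns and compact the responses into a signature."""
    sig = 0
    for pattern in lfsr_stream(seed, taps, n):
        sig = misr_update(sig, logic(pattern), taps, width)
    return sig

# Golden signature, captured once from the known-good design.
GOLDEN = run_bist(toy_logic)

def power_on_self_test():
    """Field test at power-on: pass if the signature matches golden."""
    return run_bist(toy_logic) == GOLDEN
```

A defect that changes any response (a stuck-at fault, say) will almost always drive the signature away from the golden value, though signature compaction carries a small aliasing probability that a real BIST architecture must budget for.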
The regular on-chip tests performed in the field might provide additional information for ongoing yield analysis if failures that emerge sometime after installation turn out to be systematic.
DFT is already being used extensively for yield monitoring and remediation, said David Park, vice president of worldwide marketing for manufacturing-test supplier OptimalPlus, and that role is becoming more prominent.
“We believe that foundries and manufacturers can bring test data all the way back to the EDA tools to diagnose issues around yield. It will make DFT much more active,” Park said, noting that the types of test currently used to track process problems during wafer or die test will start being used at the package and board level.
He pointed to the likely increase in the use of multichip modules. “The modules will have expensive die and also inexpensive die. Being able to balance the quality of those for the end device will be important,” Park said.
Joe Sawicki, general manager of Mentor Graphics’ design-to-silicon division, said: “We need to make sure we are capturing subtle defects. There are new things we are doing like diagnostics. Large-scale diagnostics can drive yield up by significant margins. Some customers are running thousands of CPUs per day to generate the diagnostics.”
Easing yield analysis
Decoding the information from structural DFT into a form suitable for tracking yield issues is difficult enough; unstructured test is likely to add a further layer of complexity to the analysis. Sawicki noted developments in structural test will be needed to provide better diagnostics, particularly within cells. “That will get better in a reasonably short time. But getting to know whether it’s a systematic, that’s something we don’t know yet.”
The DFT data could feed into design by showing which layout patterns are more likely to lead to manufacturing problems.
Sawicki added: “The opportunity space is still rich in terms of problems to solve. For some people, test is becoming part of the functional spec. Power-on self-test is a required part of ISO 26262.”
There are still things to do in structural test, Sawicki said: “A couple of things are happening. We are driving the insertion from gate up to RTL level. And doing things like hierarchical test so you don’t end up generating test patterns for a full billion-transistor chip at once and have everyone knocking on your door asking ‘is it ready yet?’.”
Rearick argued that, rather than being seen as an unwelcome cost to each die and to the schedule, better DFT techniques will be welcomed by management. “If you can show you save the company money, you are a first-class citizen. With a little incremental DFT you can improve yield. And you will win.”
Sawicki agreed: “In situations where companies have had a yield crash and they used diagnostics to deal with it, DFT engineers became their favorite people.
“Because of factors like these, we are seeing a resurgence in interest in test-point insertion. By inserting test points you can get levels of test that are more attractive,” Sawicki claimed.
Rearick said trends point to a resurgence of system-level test as well, with “finer and finer screens”. The result is likely to be much more development in DFT on the way to 2020 and beyond.