20nm design is fraught with problems for analog designers, but the one causing the biggest headaches is variation in pattern density, Joachim Kunkel, general manager of the solutions group at Synopsys, explained at the IPSoC conference in Grenoble, France today.
It’s not as if the 20nm node is short of obstacles for the mixed-signal designer, but many of them can be handled with a change in architecture or design style. Maintaining consistent density between two parts of an analog circuit that should, in principle, be matched can involve a negotiation between IP and SoC designers, which makes the provision of off-the-shelf IP even harder to achieve.
“Up to the 28nm node, things were pretty consistent,” Kunkel said. “Life was difficult but still reasonable for our engineers. Beyond that point the world changed. The process of doing analog design in 20nm versus 28nm is very different. The biggest change is that the rules for design for manufacturability have changed dramatically and become more restrictive.
“The most important thing for us was to be able to analyze layout dependencies.”
Even so, Synopsys produced no fewer than 30 test chips for the 28nm node in order to get its tools and IP up and running. Similarly, test chips have provided the basis for the company’s 20nm work and have uncovered a number of issues, some of which were expected, such as patterning problems, and others, such as the density problem, which were not so obvious.
Designers working on 20nm processes get an early taste of the kind of device-size quantization they are going to experience with finFETs. At 20nm, fixed sizes are mostly to do with patternability – only legal device sizes and shapes can be guaranteed to print accurately on the wafer. This is not good news for analog designers.
“What you do in analog design is mostly to do with device sizing,” said Kunkel. “When they are quantized, a digital designer will think ‘so what’. For an analog designer, it’s a disaster. The solution is to come up with different circuit architectures.”
It’s not just the size and shape. It’s how the devices are laid out. “Manufacturing rules are telling you in general what is the orientation of your IP. For hard IP macros, this is a major limitation. You often need to talk to the customer about where a core will go,” Kunkel said. “You can be a bit more intelligent about creating IP by using more automation.”
For example, rather than designing fixed macros, layout generators are used to create hard IP cores of the desired shape and pinout with scripts used to ensure that devices conform to the manufacturability and interconnect rules.
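In spirit, such a layout generator is a parameterized script that emits placements for a requested shape while enforcing the manufacturing rules. The fragment below is a deliberately minimal sketch of that idea; the `MIN_PITCH_NM` rule value and the `generate_row` function are illustrative inventions, not real foundry data or a real Synopsys tool.

```python
# Hypothetical minimum device pitch (nm) imposed by manufacturing rules.
MIN_PITCH_NM = 90

def generate_row(n_devices, pitch_nm, y_nm=0):
    """Place n_devices along a row at the given pitch, rejecting any
    request that violates the minimum-pitch manufacturing rule."""
    if pitch_nm < MIN_PITCH_NM:
        raise ValueError(f"pitch {pitch_nm} nm violates {MIN_PITCH_NM} nm rule")
    return [(i * pitch_nm, y_nm) for i in range(n_devices)]

# A legal request yields a concrete placement list for the hard IP core;
# an illegal pitch is caught by the script rather than failing on silicon.
row = generate_row(4, 120)
```

The point of the approach is that the shape and pinout become parameters, so the same generator can produce a rule-clean core for different floorplan constraints instead of forcing the IP vendor and SoC team to negotiate around one fixed macro.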
“We are trying to work with foundries and customers to see if we can at least achieve some form of commonality in metal stacks,” Kunkel added.
Performing the density fill needed to maintain consistent strain across transistors and to stop chemical-mechanical polishing (CMP) from causing problems has become much more difficult. The operation appears to be growing as compute-intensive as optical proximity correction was a couple of generations back.
“When it came to density fill for 28nm we would do it at the end and it would take a couple of days. We are spending weeks doing density fills for 20 and 22nm for something as regular as a memory array,” Kunkel said. “The amount of compute power that we are burning is a factor of three, four, five times and in some cases is an order of magnitude higher than what we used to burn.”
“The main issue with density is how quickly it goes wrong. If you have to change something, it has a ripple effect on other parts of the chip and it ends up in a negotiation with the guy who does the SoC, which is interesting,” Kunkel noted.
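The analysis underlying density fill can be sketched as a windowed check: the layout is scanned in fixed windows, and any window whose metal density falls outside a target band either needs fill shapes or a layout change, which is what triggers the ripple effect Kunkel describes. The window size, grid representation, and density limits below are made up for illustration.

```python
def window_densities(grid, win=2):
    """grid: 2D list of 0/1 cells (1 = metal present). Returns the metal
    density of each non-overlapping win x win window, scanned row-major."""
    out = []
    for r in range(0, len(grid), win):
        for c in range(0, len(grid[0]), win):
            cells = [grid[r + i][c + j] for i in range(win) for j in range(win)]
            out.append(sum(cells) / len(cells))
    return out

# A toy 4x4 layout: the top-right window is empty (density 0.0) and the
# bottom-left window is completely full (density 1.0), so both fall
# outside a hypothetical 20-80% density band and would need attention.
grid = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 0]]
violations = [d for d in window_densities(grid) if not 0.2 <= d <= 0.8]
```

On a real chip the same check runs over billions of cells and every metal layer, and adding fill to one window changes the density its neighbors see, which is a simple way to see why the runtime and the cross-team negotiations have both ballooned.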