We’re going to hear a lot about the 20nm node at this year’s Design Automation Conference. That will surprise some, given the continuing scuttlebutt over 28nm, but the reality is that the industry has to keep marching down the roadmap. On one level, it’s a simple matter of economics; on another it’s about defining and maintaining your technological edge.
So, in recent days, we’ve already seen announcements such as Cadence Design Systems and STMicroelectronics completing a 20nm test chip, and Synopsys receiving Phase I certification on its physical design compiler, extraction and custom implementation tools from TSMC.
On the DFM side, Mentor Graphics’ Calibre products have been heavy hitters since their launch. This DAC has seen a slew of announcements: again, TSMC qualification, along with specific deals on pattern matching and Smart Fill; more Smart Fill work with GlobalFoundries; and deeper engagement on computational lithography with Samsung.
We sat down with Joe Sawicki, vice president and general manager of Mentor’s Design to Silicon division, to get his take on some of the key aspects of 20nm, and how incoming nodes can also influence existing ones.
Q. So why this current wave of 20nm announcements, other than – obviously – that companies time these for DAC? Still, 28nm has been a slog, and you will get some people saying, “Sort that first”?
A. Well, you will get people who say that, but 28nm is starting to roll into production. We can argue whether this foundry or that is dead or doing just fine. But it is a process node that is going to ramp. Some people are doing well, some aren’t, but it’s gonna ramp.
So, 20nm tape-outs for test chips are starting to get loaded into the process right now. We’re working with a mid-single-digit number of accounts that are going through their first pipecleaning.
Then, it simply becomes a matter of timing – this is when you need to have this stuff ready. And when you start figuring out what all the new technologies are, there’s a fairly significant stack of stuff that’s coming in.
So how tough a node does 20nm look?
I don’t think anybody can comment with absolute certainty at this point about 20nm being a good, bad or indifferent node. It’s just way too early. But you can look at the indicators.
I’ve not done the detailed mathematics in terms of density constraints, but at least on some of the stuff I’ve seen, they look pretty reasonable. The nice part about it is that the only really major transition that happens – at least for the foundries – is that double patterning comes into play for certain layers.
Though that is a funky technology and was a lot of fun for us to do on the OPC [optical proximity correction] side, it’s not like changing a transistor architecture, adding high-k dielectrics or putting new HKMG transistors in place. It’s got a relatively low implementation risk.
Although, there are those who’d rather live without double patterning too. I’ve heard some senior guys on the foundry side say they’d like it to be a one-node only requirement.
“Does EUV work?” That’s what it comes down to. I’ve been saying this for years: “The day EUV works, everyone will use it.” It’s easier. The issue is that as of right now, it doesn’t.
You look at metal stacks these days, there’s some pretty heavy variability around what has to be achieved on a per-layer basis in terms of density. There are many layers at 20nm that don’t require double patterning because the half pitch is 80nm or greater. And so, even if you get EUV in place, I would still bet dollars to doughnuts that you will have metal layers in the technologies – whether it be 14, 10, 8, sheesh, maybe even 5 or 6 – that will be done with either a single or double patterning solution.
Triple patterning. Now that’s a little tough. We have to figure out the economics on that one.
Do people want to stop doing double patterning? You bet. They want an EUV machine for $80m that will do 100 wafers per hour and Bob’s your uncle. Unfortunately…
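For readers wondering why some layouts can be double patterned and others need a third mask, the standard way to model the problem is as graph coloring: features closer together than the single-mask minimum spacing must land on different masks, and the layout decomposes onto two masks only if that conflict graph has no odd cycle. The sketch below is purely illustrative – it is not Mentor’s decomposition engine, and the feature-index/conflict-list representation is an assumption for the example.

```python
from collections import deque

def two_colorable(n, conflicts):
    """Try to assign n layout features to two masks (double patterning).

    `conflicts` maps a feature index to the features it sits too close to;
    any two conflicting features must go on different masks.  Returns a
    mask assignment (list of 0/1) if one exists, or None when the conflict
    graph contains an odd cycle -- i.e. the pattern cannot be split across
    two masks and would need triple patterning or a layout change.
    """
    mask = [None] * n
    for start in range(n):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:                      # breadth-first 2-coloring
            u = queue.popleft()
            for v in conflicts.get(u, ()):
                if mask[v] is None:
                    mask[v] = 1 - mask[u]  # neighbor goes on the other mask
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None            # odd cycle: not decomposable
    return mask
```

A triangle of three mutually too-close features is the classic failure case: `two_colorable(3, {0: [1, 2], 1: [0, 2], 2: [0, 1]})` returns `None`, which is exactly the situation where triple patterning – and its tougher economics – enters the picture.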
Your Smart Fill technology is also getting a lot of traction at 20nm.
When you get down to the lower technology nodes, variability becomes an ever more critical issue. The smaller the features get, the more tertiary effects become secondary effects and secondary effects become primary ones, and they all come with their own variability profiles.
Fill starts to get really important because it’s all getting more aggressive. It starts to become something that has to be OPC’d as well. You can’t not OPC it, because if you don’t, it might disappear.
Smart Fill is a really good product to do that effectively while maintaining reasonable design file sizes and without blowing up your manufacturing process.
Something interesting there: We started off with Smart Fill almost entirely targeted towards better planarity. What came out of work with TSMC, though, was that the file-size benefit was as important to them as fill quality.
Looking at the traditional methodology, they could see that it was just going to explode the file sizes. And because of the way we had architected Smart Fill, it could do a much better job of handling fill hierarchy and shape-based fill, all of which helps you take more advantage of the repetition that’s in there to decrease file sizes.
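As a back-of-envelope illustration of why exploiting repetition shrinks layout files (hypothetical numbers, not TSMC’s data): a flat methodology writes every fill shape out explicitly, while a hierarchical one writes the fill cell once and then repeats it with a single array reference.

```python
def flat_record_count(num_instances, shapes_per_cell):
    """Flat fill: every shape of every fill instance is written out."""
    return num_instances * shapes_per_cell

def hierarchical_record_count(num_instances, shapes_per_cell):
    """Hierarchical fill: one cell definition (shapes_per_cell records)
    plus one array reference covering all num_instances placements.
    Note the count no longer depends on how many times the cell repeats."""
    return shapes_per_cell + 1

# Toy comparison: a million placements of a 4-shape fill cell is
# ~4,000,000 records flat, but only 5 records hierarchically.
```

Real GDSII/OASIS files are more involved than a record count, of course, but the scaling argument is the same one Sawicki describes: the more repetition the fill engine preserves, the less the file size grows with the amount of fill.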
If there is a 20nm ‘call to action’ for the power users, what about everyone else?
There are parts of this story that are for more than just the big guys.
As 28nm now really does ramp in, my sense is that it is actually going to grab the second tier faster than a few of the more recent nodes have. There’s an awful lot of capacity being built for 28nm, so once the yield ramps to mature levels, that will provide an incredible opportunity for people to go to an advanced node a year earlier than they might have done before.
Directly relevant to that, I’d point to some of what we’re initiating or have done at 20nm around [Mentor’s circuit and electrical verification tool] PERC. The focus there is on reliability-type checks that allow users to do a very thorough but fast verification – to ensure you don’t get things like ESD damage, latch-up or other such failures. So you drive that toward the 20nm node and add new features and capabilities, and as you find them, you discover that those technologies can be quickly backfilled for use at 28nm and even 40nm.
We’ve already had some technology we’ve been working on for the yield-learning side of things at 20nm that we’ve been able to take to a customer and use to help them find a CMP issue on a 40nm process. I don’t think you could get much more mature than 40nm.
So what comes out of the work we do for 20nm – whether or not the customer is actually working toward that node – is that you can take these technologies back to previous generations, and generally we’re able to give people better control and better reliability on their existing processes. That’s one part of the picture that gets overlooked sometimes.