Getting ready for 20nm

By Antun Domic |  2 Comments  |  Posted: September 6, 2012
Topics/Categories: Design to Silicon, Blog - EDA, Blog - Industry Blogs

The 20nm process node is upon us, and I see three key challenges with the shift to this geometry. The first is the sheer complexity of dealing with these huge designs, which affects the whole flow, from place and route through analysis to verification. This is causing concerns about the size of the teams necessary to tackle designs, and the capacity of the tools to handle all this complexity in a timely fashion.

The second issue is one of physics: it is not possible to print features at the 64nm spacing of first-level metal on a 20nm process using a single mask and current lithography systems. Instead, you have to split such narrowly spaced features onto two masks whose patterns are interleaved. This approach extends existing lithographic techniques, but brings new challenges, such as how to split patterns onto two masks (the coloring issue), how to place and route such designs, and new intricacies in analysis.
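To make the interleaving concrete, here is a minimal Python sketch (not a production decomposition tool) that splits a dense array of routing tracks onto two masks. The 64nm pitch comes from the article; the 128nm single-mask pitch and the simple alternating assignment are illustrative assumptions.

```python
# Illustrative sketch of double-patterning decomposition for a regular
# 1-D track array: alternate tracks go to mask A and mask B, so each
# mask prints at twice the drawn pitch. The 128nm per-mask pitch below
# is an assumed single-exposure limit, not a foundry number.

MIN_SINGLE_MASK_PITCH = 128  # nm (assumption for illustration)

def split_tracks(pitch_nm, n_tracks):
    """Assign alternating tracks to two masks; return their positions."""
    mask_a = [i * pitch_nm for i in range(0, n_tracks, 2)]
    mask_b = [i * pitch_nm for i in range(1, n_tracks, 2)]
    return mask_a, mask_b

mask_a, mask_b = split_tracks(64, 8)
# Within each mask, neighboring tracks are now 128nm apart, which a
# single exposure can resolve even though the drawn pitch is 64nm.
assert all(b - a == MIN_SINGLE_MASK_PITCH for a, b in zip(mask_a, mask_a[1:]))
```

Real layouts are two-dimensional and irregular, which is what turns this simple interleaving into the general coloring problem the article describes.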

The third issue is economic. Double patterning will take extra mask layers, so the cost of designing a 20nm chip will be much greater than for previous process generations. The only way that chip designers will recover these costs is to sell very large volumes. This is driving competition, especially in the wireless market where the company that produces the fastest processor core wins the design. That, in turn, means a growing emphasis on higher clock rates and the quality of design necessary to achieve them.

Is the transition to 20nm so different from previous node transitions, in terms of product and technology development?

Well, every new node brings the ability to integrate more functionality on a chip, which we are used to, but one 20nm design I heard of will use more than 20 billion transistors, which is a huge number, even if some of them are in the memory arrays.

What is different at this node is the extreme detail involved in applying the technology. You have to design cell libraries to be compatible with double patterning, which creates a lot of work in the infrastructure of the flow. It’s forcing us, for example, to rethink routing so that we can produce colorable layouts. There are also new issues in sign-off, because mask misalignments may introduce new process variations that have to be accounted for through modeling, extraction and timing analysis.

In terms of design intent, the emphasis on clock frequencies at 20nm is not so different from previous nodes, which were always taken up because they offered a functional advantage such as higher clock speeds, lower power or less area. At this node, though, especially in high-end portable applications, the key trade-off is to achieve the highest operating frequency for a given power budget.
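As a rough illustration of that trade-off, the standard first-order CMOS dynamic power model, P ≈ α·C·V²·f, can be inverted to find the highest clock rate a power budget allows. The model is textbook material, but every number below is a hypothetical example, not silicon data.

```python
# First-order look at the frequency-vs-power trade-off: dynamic power
# P ~ alpha * C * V^2 * f, solved for f. This ignores leakage and the
# fact that lowering Vdd also slows transistors, so it is only a
# budgeting sketch, not a timing model.

def max_frequency_hz(power_budget_w, switched_cap_f, vdd_v, activity=0.15):
    """Highest clock rate that fits the dynamic-power budget."""
    return power_budget_w / (activity * switched_cap_f * vdd_v ** 2)

# Hypothetical mobile SoC core: 2W budget, 10nF effective switched
# capacitance, 0.9V supply -> roughly 1.6GHz.
f = max_frequency_hz(2.0, 1e-8, 0.9)
```

The quadratic dependence on supply voltage is why so much 20nm design effort goes into running logic at the lowest voltage that still closes timing.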

What can tool vendors do to help tackle these issues? We’ve been supporting the development of 20nm processes from the beginning, thanks to our TCAD tools, which foundries use to analyze the basic device structures, and our lithography tools, which explore the printability issues of double patterning.

We’re also extending the innovations that we developed for 40nm and 28nm processes to 20nm, particularly the ability to link our place and route to layout verification. This becomes even more important at 20nm because although we try to produce layouts that are ‘correct by construction’, splitting one layer onto two masks for double patterning makes this more difficult. For example, you could split your design in a way that meets the coloring rules locally, but which violates them at the whole-chip level. You could fix this by asking your router to inspect very large areas for coloring violations as it goes, which would increase run times, or by restricting your layout rules so much that it cuts into utilization. The approach we are taking instead is to combine design implementation with in-design verification. Our DRC/LVS tools have been developed to handle large numbers of polygons and to check for such ‘large cycle’ violations. By using this approach for physical verification we are able to optimize the design flow to maximize throughput and meet the design objectives.
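The ‘large cycle’ problem can be sketched as graph two-coloring: features closer than the single-mask spacing limit form a conflict graph, and any odd cycle in that graph, which may thread through many cells, has no legal mask assignment. The toy Python check below (the graph data is hypothetical, and real tools work on polygons rather than abstract graphs) shows why a purely local check can pass while the whole-chip one fails.

```python
# Sketch of a global coloring check for double patterning: two-color the
# conflict graph by BFS; if two adjacent features demand the same mask,
# we have hit an odd cycle, i.e. an uncolorable configuration.
from collections import deque

def two_color(conflicts, n):
    """Return (colors, None) on success, or (None, bad_edge) when an
    odd cycle makes a legal two-mask assignment impossible."""
    adj = [[] for _ in range(n)]
    for u, v in conflicts:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]  # opposite mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None, (u, v)  # odd cycle: coloring violation
    return color, None

# A triangle of mutually too-close features is the smallest odd cycle:
# each pair looks locally splittable, but the three together are not.
colors, violation = two_color([(0, 1), (1, 2), (2, 0)], 3)
```

An even cycle, by contrast, colors cleanly, which is exactly why a router that only inspects small windows can miss the violation: the parity of the full cycle is a global property.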

So how ready are we to provide the physical design, verification and sign-off flows necessary to use 20nm processes? Our tools have been used for quite a lot of 20nm tape-outs since we had our first success in May 2011. As the design rule manuals (DRMs) have evolved we have been updating our tools to support the latest versions from all the leading foundries and are ready to engage with our customers.

Author

Dr Antun Domic is senior vice president and general manager of the implementation group at Synopsys, Inc. He is in charge of the product line known as the Galaxy™ Design Platform, including logic synthesis, place and route, timing, and test and layout verification tools. Before joining Synopsys in 1997, Antun worked at Cadence Design Systems; at the microprocessor group of Digital Equipment during the design of several Alpha and VAX chips; and at MIT Lincoln Laboratory in Massachusetts. Antun holds a BS from the University of Chile in Santiago, and a PhD in mathematics from the Massachusetts Institute of Technology.

Links

Find out more about Synopsys’s 20nm offering here

Synopsys

700 East Middlefield Road
Mountain View, CA 94043
Phone: (650) 584-5000 or
(800) 541-7737