This one now goes to 12

By Paul Dempsey | Posted: June 3, 2011

We quiz TSMC’s Tom Quan on the latest methodological challenges being addressed by Reference Flow, the signature design portfolio of the world’s largest foundry.

As the world’s largest foundry, TSMC sets great store by its Reference Flow design portfolio, a crucial part of its broader Open Innovation Platform. The key to Reference Flow, which first appeared in 2001, is that it gives the company’s customers access to best-in-class tools from a wide variety of vendors, within an integrated format aimed at the company’s most advanced manufacturing processes.

However, the benefits do not end there. Through to Reference Flow 12, which makes its debut just before this year’s Design Automation Conference (DAC), TSMC has consistently added new capabilities.

The eleventh generation of the portfolio addressed 28nm design challenges through design enablement innovations that improved chip power, performance and design for manufacturing (DFM), lowering design obstacles, reducing power consumption, improving design margins and maximizing yield.

Specifically, it expanded into such areas as electronic system level (ESL) design, SoC interconnect fabric, and 2.5D/3D integrated circuits using silicon interposer and through-silicon via (TSV) technology.

In addition, TSMC last year launched its first dedicated AMS (analog and mixed-signal) Design Flow; its 2.0 generation will also be released at DAC this month.

This layout-dependent effect (LDE) aware AMS methodology features a TSMC-specific LDE engine, a complete DFM-aware analog layout guideline and checker utility, advanced analog base cell design, and a comprehensive design configuration management environment, integrated on top of a 28nm interoperable process design kit and the OpenAccess database.

The advanced AMS design flow has a front-end design and simulation platform for the analysis of design sensitivity, yield, multiple process corners, noise effects, IR drop and electromigration issues.

The AMS physical flow features constraint-driven analog place and route technology for fast layout prototyping, semi-automatic rule-driven layout assistance, and a demonstration of a PLL system design budgeting and loop filter layout synthesis capability.

The AMS physical verification flow includes accurate 3D field-solver-based extraction with intelligent RC reduction, full DRC/LVS sign-off, and dummy pattern insertion and extraction.

In short, Reference Flow and the AMS Design Flow both reflect today’s need for ever more complex design and manufacturing challenges to be addressed within broad-based, fully featured ecosystems, developed through partnerships between foundries, EDA vendors, IP vendors and back-end packaging providers. Nobody can go it alone in the deep submicron era.

One of the best-known public faces of TSMC’s Reference Flow efforts—and indeed many of its other related activities in the US—is Tom Quan. Tom’s official title is deputy director North America, Design Methodology and Services Marketing. However, he is perhaps better regarded as the ‘go-to guy’ when it comes to planning a design project with the foundry.

Tom Quan

On the eve of Reference Flow 12’s launch, Tech Design Forum talked to Tom about the evolution of the service and got some pointers as to what to expect in practical terms from the new release. Space constraints here limit this article’s review of the discussion to three areas: ESL, 3D IC, and DFM. However, a much longer version of the discussion can also be found online (www.techdesignforums.com), where he also covers other areas such as power, timing and AMS.

Tech Design Forum: You added ESL for the first time in Reference Flow 11, but even now perhaps there are some design managers who are wondering how exactly that dovetails with the services a foundry provides. To many people, ESL is something they associate with a very high-level definition of a project’s goals and specification, issues that perhaps don’t immediately touch on manufacturing.

Tom Quan: The thinking behind adding ESL is that if you look at the cost of product development, a major part of it is incurred during system-level design verification and hardware/software co-design. You can say that the physical implementation of IP represents a smaller part of that cost.

Then, what we discovered was that when you looked at the two stages, first system-level design and then implementation into the actual chip, there wasn’t a lot of connection between them; they were quite disconnected. There’s not a lot of predictability as to how a system will perform once you actually implement it.

So that was the challenge for customers and ultimately for us: to look into how we could extend Reference Flow, which has traditionally focused on RTL to GDS implementation, into the system level.

TDF: So what are the kinds of solutions you have been able to bring to that?

TQ: One of the key components has been using the TSMC PPA Model—that’s PPA for ‘power, performance, area’. We’ve worked on how different pieces of the system can be provided with a PPA model, so that when the system-level verification is being run, the process technology-based PPA elements can be analyzed to give the user a good idea at an early stage of how the system will perform before physical implementation.

Up until now there had been no way of bringing information from the process side or the library/IP side up to the system level without slowing down the system verification process. So, the idea is that TSMC makes it possible to capture critical figure of merit data from the process or implementation side and bring it up to the system level without impacting the verification cycle.

As an example, one of the first areas that we have attacked is the power part of the model. Imagine you have a system block-level diagram where you have a CPU, a programmable interface controller, a memory subsystem and an I/O subsystem. For the CPU, there’s a CPU power model in PPA. During the system verification, a TSMC PPA engine reads in the information from the process side, as well as libraries/IP, but it also reads in the PPA power model for the CPU.

As the CPU and the system are running through the different transactions, the engine can compute the power usage of the CPU in different operating modes. It can then display or represent a power monitoring waveform of some sort so that the designer can see how the system power goes up and down for each of those modes. That’s crucial for a customer who wants visibility and control over the energy consumption of the whole system.
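To make the idea concrete, here is a minimal sketch of how such mode-based power bookkeeping might work during a transaction-level run. It is purely illustrative: the class, mode names and milliwatt figures are invented for the example and do not reflect TSMC’s actual PPA model format.

class PpaPowerModel:
    """Per-mode power figures (in mW) for one block, e.g. a CPU."""
    def __init__(self, mode_power_mw):
        self.mode_power_mw = mode_power_mw

    def power(self, mode):
        return self.mode_power_mw[mode]

# Example figures for a hypothetical CPU block.
cpu = PpaPowerModel({"sleep": 0.5, "idle": 12.0, "active": 450.0})

# Mode-change events (time in microseconds) emitted by the
# system-level simulation as transactions run.
trace = [(0, "sleep"), (100, "active"), (400, "idle"),
         (500, "active"), (900, "sleep")]

def power_waveform(model, trace, end_us):
    """Expand the mode trace into (time, mW) steps for a power monitor plot."""
    steps = []
    for (t, mode), (t_next, _) in zip(trace, trace[1:] + [(end_us, None)]):
        steps.append((t, model.power(mode)))
        steps.append((t_next, model.power(mode)))
    return steps

def average_power_mw(model, trace, end_us):
    """Time-weighted average power over the whole run."""
    total = 0.0
    for (t, mode), (t_next, _) in zip(trace, trace[1:] + [(end_us, None)]):
        total += model.power(mode) * (t_next - t)
    return total / end_us

print(average_power_mw(cpu, trace, 1000))  # -> 316.3 (mW)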

So it’s very much about bridging a gap, but doing so in an efficient and targeted way, and we’re continuing to build that capability into Reference Flow 12.

TDF: ESL is obviously getting a lot more traction these days as well, but another area that seems to be generating as much interest—and is perhaps one more traditionally associated with TSMC as a foundry—is 3D ICs, system-in-package and so on. That’s a technology where you are also expanding Reference Flow’s capabilities. What is driving the state-of-play further down that road from your perspective?

TQ: If you look at the 3D IC market trends in the last couple of years, there’s been a lot of discussion about how we can use 3D and related technologies to achieve higher performance, lower power and, now, reduced form factors. That’s largely due to the importance of mobile devices in the overall market. But people generally want to implement this kind of technology in a way that gives them the optimal cost structure. And of course, time to market is still very important. So how do you integrate these goals?

I think the answer is now very clear. Today, there is an intermediate step, the 2.5D silicon interposer, which fits all the requirements: faster integration, shorter time-to-market and, for most projects, that optimal cost structure. In the longer term, you will then probably see a true 3D IC strategy.

If you look at a printed circuit board (PCB), you will have components on there like the processor, the memory—flash, DRAM—and other devices like RF power amps and passives. The first step is probably to have the CPU and all the logic applications on the same silicon interposer platform as the memory.

The memory can actually be stacked already, flash-on-flash or DRAM-on-DRAM, and those stacks can sit side-by-side with the CPU. Even from the perspective of a PCB substrate, putting those on the same interposer will reduce the form factor by quite a bit.

Over time, as we learn more and more about how to stack these things and model them correctly using through silicon vias (TSVs), then more full 3D will happen and we will be able to achieve the overall system performance without losing all that a single system-on-chip (SoC) can deliver.

TDF: It’s true though that TSVs are still seen as pretty challenging.

TQ: You do have to be sure about what you’re doing, but also you have to know how you are going to present the information to the designer.

One of the key things with TSVs is having the manufacturing capability. The next thing is the design enablement. So, the first thing you look at is, ‘How do I model this?’ In the technical domain that means looking at the timing, the resistance/capacitance and quite a few other aspects.

But it’s equally a question of ‘How do we make it so that it all works and feels like a metal-to-metal via in a 2D design?’ If you connect one layer of metal to another one, you need a via. So we need to treat the TSV from a designer’s perspective, just like a traditional via, so that it’s very easy to just add them into the whole process without incurring any additional complexity. Essentially, you want the place and route (P&R) tools that are already used to handle standard vias to handle TSVs as well.

There are a number of issues here. We are working on TSV modeling. And once you start that, you need to look at the timing linkage between the die through the TSV. Then there are things like getting multi-die floorplans that are P&R-aware between the die. Then how do you extract the TSV RC? And part of the TSV must connect out to microbumps or C4 bumps. So, how do you merge all this together into a single signal net so that you can extract them all together?
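As a rough illustration of that single-merged-net point, the sketch below lumps a cross-die path (die-1 metal, TSV, microbump, die-2 metal) into one extracted net, treating the TSV as just another parasitic element. All element values are invented placeholders, not foundry data.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    r_ohm: float  # series resistance in ohms
    c_ff: float   # capacitance to ground in femtofarads

# One signal path from a driver on die 1 to a receiver on die 2.
net = [
    Element("die1_metal", 120.0, 35.0),
    Element("tsv",          0.3, 40.0),  # TSVs: very low R, notable C
    Element("microbump",   0.05,  2.0),
    Element("die2_metal", 150.0, 40.0),
]

# Merge the segments into a single lumped net, so a timing or
# extraction tool can treat the cross-die route like a 2D net.
total_r = sum(e.r_ohm for e in net)
total_c = sum(e.c_ff for e in net)

# First-order lumped RC delay estimate: ohm * fF = 1e-15 s = 1e-3 ps.
delay_ps = total_r * total_c * 1e-3
print(f"R = {total_r:.1f} ohm, C = {total_c:.0f} fF, delay ~ {delay_ps:.1f} ps")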

Another dimension is thermo-mechanical stress.

A TSV is a pretty good size. The exact dimensions depend on the technology, but it occupies a significant amount of silicon real estate. So if you place one on the chip, where are the ‘keep-out’ zones and how close can the circuitry be? What are the mechanical stress effects the circuit will incur? Then for thermal effects, you have to remember that a TSV is a conducting element. You need to look into that and see what influence it has on how the heat dissipates inside the die.

Other elements that come in from an analysis viewpoint are IR drop and electromigration.

In Reference Flow 11 and now Reference Flow 12, you can see a number of areas where we are working with different partners to introduce some new analysis capabilities to address these areas.

TDF: How is that collaboration progressing?

TQ: Two or three years ago, when we started this discussion, it seemed like there were a lot of areas that needed to be improved, and some that needed to be reworked. But as we’ve worked through the problem item-by-item—floorplanning, place and route, DRC/LVS, modeling and simulation, timing—it’s not been as bad as we feared.

Most of the challenges actually require incremental enhancements to the tools. They don’t require a wholesale rip-up and redo or anything like that. That’s a good thing. We know how to bring the manufacturability from our side and the tool capability from the vendors’. And as a result we’re not incurring major ecosystem investment. We can do it in steps that allow people to do 2.5D and move toward 3D, but still get benefits and meet their needs today.

TDF: Moving on to DFM. The 28nm node is here today—TSMC itself is fast closing on 100 tape-outs—so what’s happening to make that widely available?

TQ: You’re right, there’s been quite a lot happening in the last year. For 40nm and 28nm we are now completing the implementation and deployment of what we call the Unified DFM Engine.

What that means is that the DFM Design Kit that we ship to customers includes both the DFM manufacturing data and this unified engine inside. We provide an API through which a tool from Synopsys, Mentor [Graphics] or Cadence [Design Systems] can interact with the engine and the surrounding data to identify hotspots, at both 28nm and 40nm. The engine ensures that hotspots are captured with the same accuracy across different DFM tools.

The second thing we are addressing this year is the turnaround time of DFM analysis. As things have become more and more complex at 40nm and then 28nm, running the DFM checks has inevitably begun to take longer. Proactively, we have been working with our partners to speed this up.

To take a step back, for some time there have been two implementation schemes in the DFM space. One is built around a pattern-based (or rule-based) approach and the other takes a model-based one.

The pattern-based approach is, essentially, that you have pre-defined patterns where you know what will and will not pass. If the design maps to those patterns, then hotspots can be quickly detected. You get a very quick turnaround time. But it is limited. If you know what patterns lead to hotspots—great. But if you don’t, if it’s not in the database, then you can potentially miss them. The model-based approach is actually much more thorough. It’s slower but it helps you detect those hotspots that are not in the database.

In Reference Flow 12, we have taken a new approach that aims to combine the two schemes, and has some sort of self-learning loop connecting the different implementations. So as the process matures and we learn more about hotspots, we move that knowledge into the pattern-based side so it speeds up continuously, but we’re also looking to identify parts of the design that should run through the model side. If you do that, you give the user a combination of cycle time reduction and ensured accuracy in detecting the DFM hotspots.
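A schematic sketch of that hybrid scheme is shown below: match known patterns first for speed, fall back to the slower model-based check where patterns are silent, and feed newly confirmed hotspots back into the pattern database. Everything here, the function names, pattern labels and matching criterion, is invented for illustration.

known_hotspot_patterns = {"narrow_jog_A", "dense_via_cluster_B"}

def model_based_check(window):
    """Stand-in for the slow, thorough lithography-model simulation."""
    return window.endswith("_risky")  # placeholder pass/fail criterion

def find_hotspots(layout_windows):
    hotspots = []
    for w in layout_windows:
        if w in known_hotspot_patterns:     # fast path: known pattern
            hotspots.append(w)
        elif model_based_check(w):          # slow path: model-based
            hotspots.append(w)
            known_hotspot_patterns.add(w)   # self-learning loop: the next
                                            # run hits the fast path instead
    return hotspots

print(find_hotspots(["narrow_jog_A", "wide_track", "new_shape_risky"]))
# -> ['narrow_jog_A', 'new_shape_risky']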

TDF: What is the current situation regarding low power?

TQ: Today, low power is a universal requirement. It’s something that we’re asked for by all our customers, whether they are in the mobile market, or in networking, or even in desktop and graphics chips. We’ve got the experience here. Each Reference Flow release since 8.0 and 65nm has addressed both low power and timing.

On the process side, we have dedicated low power options at the latest nodes. We have 40LP at 40nm. At 28nm, we have 28LP, and then a low power variant of 28HP called 28HPL.

The flows include a lot of new techniques and methods that help customers incorporate low power features in their designs. We’ve become increasingly familiar with back bias techniques, gate bias techniques, power islands, voltage islands, and on/off header and footer switches.

Some of these are straightforward to implement, but others do require designers to be involved in the detailed design of the low power circuitry to take full advantage.

What we and our partners have to do is provide the ecosystem and the infrastructure that allow our customers to draw on those techniques as appropriate and in the most efficient way.

TDF: And then you have timing?

TQ: Chips are getting very large and some signals have to travel a long way, from one side of a device to the other. So, there has been a lot of focus on timing delay and, in that context, variation is becoming increasingly important.

Signals traveling these long distances have to go over silicon real estate that has different thicknesses, so CMP (chemical mechanical polishing) is a major factor here. Where will the thickness of the metal interconnect affect the timing, and where might there be variations due to systematic or random issues? You need to be able to detect and model these factors within the manufacturing process and put the necessary information into the timing analysis flow.
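A back-of-envelope calculation shows why interconnect thickness matters so much here: wire resistance scales inversely with metal thickness (R = rho * L / (W * t)), so a CMP-induced thickness spread translates directly into resistance, and hence delay, variation. The numbers below are generic copper-interconnect placeholders, not process data.

rho_cu = 1.9e-8   # ohm*m, effective copper resistivity
length = 1e-3     # 1 mm cross-chip route
width  = 100e-9   # 100 nm wire width

for thickness_nm in (180, 200, 220):         # +/-10% CMP thickness spread
    t_m = thickness_nm * 1e-9
    r_ohm = rho_cu * length / (width * t_m)  # R = rho * L / (W * t)
    print(f"t = {thickness_nm} nm -> R = {r_ohm:.0f} ohm")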

In the last two years, there’s been a lot of talk about OCV—on-chip variation. It is all about providing numbers to model that variation within the die and from die to die.
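In sign-off terms, such OCV numbers typically show up as derating factors on path delays. The fragment below illustrates the conservative setup-check combination, a slow launch side against a fast capture side; the 8% derate and the delay values are arbitrary examples, not TSMC figures.

ocv_derate  = 0.08   # example +/-8% on-chip variation derate
launch_clk  = 0.50   # ns, nominal insertion delay to the launch flop
capture_clk = 0.50   # ns, nominal insertion delay to the capture flop
data_path   = 1.00   # ns, nominal combinational data-path delay
period      = 2.00   # ns, clock period

# Conservative setup check: launch clock and data path assumed slow,
# capture clock assumed fast, to cover within-die variation.
arrival  = (launch_clk + data_path) * (1 + ocv_derate)
required = period + capture_clk * (1 - ocv_derate)
print(f"setup slack: {required - arrival:.2f} ns")  # -> 0.84 ns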

TSMC will be presenting both Reference Flow 12.0 and its AMS Reference Flow 2.0 at DAC 2011 and you can read more about their new features at www.tsmc.com or http://bit.ly/lUtIGT.

