3D integration technologies will blend, says TSMC chief scientist
Design-technology co-optimization (DTCO) and 3D integration will dominate scaling in the coming decade, TSMC chief scientist Philip Wong claimed in his keynote at DAC on Monday. The result will be that monolithic integration and integration through in-package stacks will be indistinguishable.
“Advances so far have been driven by 2D scaling,” said Wong, who is also the Willard R. and Inez Kerr Bell Professor in the School of Engineering at Stanford University. “Whether it’s number of transistors, gate density, SRAM density, the density is increasing exponentially even today…Those who claim transistor density is not improving, I suggest you look at the data.
“This does not mean low-level metrics have been shrinking at the same rate. For example, one over contacted poly pitch has been growing at only 1.69x on a two-year cadence. What has happened is that DTCO has accounted for the density improvements seen over the past decade that have not been captured by the low-level metrics. We believe this transistor trend will continue [in combination with other factors]. Some improvements will come from 2D, some from 3D integration, some from DTCO, and some from larger dies,” Wong said.
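As a back-of-the-envelope illustration of the gap Wong describes: if overall transistor density doubles on the classic two-year cadence while pitch-driven scaling delivers only 1.69x, DTCO must supply the remaining factor. Treating both figures as areal density contributions is an assumption made here for illustration, not something stated in the talk.

```python
# Toy calculation: the density gain DTCO must supply if overall transistor
# density doubles per two-year node while pitch-driven scaling contributes
# only 1.69x. (Treating both numbers as areal contributions is an
# illustrative assumption, not a claim from the keynote.)

overall_gain = 2.0   # total transistor-density improvement per node
pitch_gain = 1.69    # density gain attributable to raw pitch scaling

dtco_gain = overall_gain / pitch_gain
print(f"Implied DTCO contribution per node: {dtco_gain:.2f}x")  # ~1.18x
```

Compounded over several nodes, even a modest ~1.18x per-node DTCO contribution accounts for a large share of the decade's density improvement.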
“If you look back at history, advances in scaling have never been sustained using just one method. In the beginning there was Dennard scaling,” Wong said, noting this ended in the early 2000s. “The semiconductor industry has continued to progress but not in the way Dennard scaling prescribed.”
Dennard and beyond
Dennard scaling gave way to techniques such as strain engineering and the introduction of more complex material stacks in gates and channels, followed by the change to finFETs in order to deal with short-channel effects. “Now it is DTCO. When one method begins to saturate, there are knobs you can turn that are waiting in the wings.”
The DTCO optimizations used by manufacturers include reducing standard-cell track height, helped by tactics such as fin-count reduction; the single diffusion break, in which neighboring cells share dummy gates; and placing contacts over active gates instead of allocating landing sites next to the gate.
Wong and colleagues published a proposal for better long-term density metrics in the Proceedings of the IEEE in April. They proposed three metrics: logic density (DL), memory density (DM), and interconnect density (DC). Whereas the first two are dominated by on-silicon density, interconnect primarily captures the density of I/O between chips in a package rather than the density of the on-chip metal lattice.
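The proposal expresses a technology as the triplet [DL, DM, DC]. A minimal sketch of how such a triplet might be represented follows; the units are those commonly used for such metrics, but the sample values and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class DensityMetric:
    """Triplet [DL, DM, DC] in the spirit of Wong et al., Proc. IEEE 2020.

    Units and the sample values below are illustrative assumptions.
    """
    dl: float  # logic transistor density, million transistors / mm^2
    dm: float  # memory bit density, million bits / mm^2
    dc: float  # chip-to-chip connection density, connections / mm^2

    def __str__(self) -> str:
        return f"[DL={self.dl:g}, DM={self.dm:g}, DC={self.dc:g}]"

# Hypothetical node: 38 MTr/mm^2 logic, 180 Mbit/mm^2 memory,
# 10,000 die-to-die connections per mm^2 of interface area.
node = DensityMetric(dl=38, dm=180, dc=10_000)
print(node)  # [DL=38, DM=180, DC=10000]
```

Tracking all three numbers, rather than transistor count alone, is what lets the metric expose the logic/memory/interconnect imbalance discussed below.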
A key problem facing the industry is the mismatch between logic and memory scaling, but Wong argued this imbalance will push the industry to innovate to make up the difference. “In an earlier paper, Dave Patterson wrote that when everything improves at the same rate nothing changes. Imbalance on the other hand requires innovation,” Wong claimed. “The most effective knob here is to increase connections. This needs innovations in compute-memory integration.”
3D stack options
Wong pointed to possible architectures that interleave multiple levels of logic and memory on top of each other, using a variety of techniques. One issue is the density of interconnect available using microbumps in conventional stacks. “Much finer integration is available through monolithic 3D. I believe this is what you will see in the future. There are three to four orders of magnitude of improvement in interconnect to be gained in moving from today’s technologies to monolithic 3D integration.
“The use of microbumps versus denser interconnect will be increasingly blurred. The techniques will smoothly blend into each other: monolithic integration will be a continuum of 3D packaging technologies. Packaging and wafer-level integration will increasingly blend into each other.”
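A rough calculation shows where the orders-of-magnitude claim comes from: vertical connection density scales roughly as one over the square of the connection pitch. The pitch values below are rough, commonly cited figures chosen for illustration, not numbers from the talk.

```python
# Back-of-envelope check on the interconnect-density gap Wong describes.
# Connection density on a square grid scales as 1/pitch^2. The pitch
# values are rough, commonly cited figures (assumptions, not from the talk).

microbump_pitch_um = 40.0   # conventional microbump stacking, ~40 um pitch
mono_via_pitch_um = 0.5     # monolithic 3D inter-layer via, sub-micron pitch

def conns_per_mm2(pitch_um: float) -> float:
    """Connections per mm^2 for a square grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

ratio = conns_per_mm2(mono_via_pitch_um) / conns_per_mm2(microbump_pitch_um)
print(f"Microbumps:    {conns_per_mm2(microbump_pitch_um):,.0f} /mm^2")
print(f"Monolithic 3D: {conns_per_mm2(mono_via_pitch_um):,.0f} /mm^2")
print(f"Improvement:   {ratio:,.0f}x")  # ~6,400x with these assumed pitches
```

With these assumed pitches the improvement lands between three and four orders of magnitude, consistent with the range Wong cited; a finer assumed via pitch pushes it higher.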
The shift to 3D integration will put pressure on design tools, Wong said. “Tools that handle partitioning will be indispensable, not just for partitioning within each die but across dies,” Wong said.
He concluded that a potential roadblock is the pricing of hardware design out of the reach of most companies. “It is extremely important to have an ecosystem that fosters innovation. Chip design today is only affordable to a small number of companies and relatively small groups of engineers. That is very different to software, which often comes from a much broader ecosystem. If we could lower the barriers [to chip design], an enormous renaissance would be unleashed. It should be as easy to innovate in hardware as it is to write a piece of software code.
“What if one day a high-school student could design chips and put them on a board as easily as programming software. I believe this is a dream that will come true one day.”