At the recent Design Automation Conference, held in San Francisco in a hybrid format, AMD senior vice president Sam Naffziger provided more insight into the chipmaker’s use of chiplet-based design and manufacturing.
“We started this architectural direction back in 2014,” he explained, adding that the primary motive then was to improve yield compared to trying to make very large monolithic SoCs. Since then the reasons for going down the chiplet path have expanded, such as the ability to mix and match product options more readily and not just in terms of different features and core counts. “We can cherrypick faster chiplets, if we have the right test and binning analysis, for premium products.”
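The cherrypicking Naffziger describes amounts to sorting tested chiplets into speed bins and reserving the fastest for premium parts. A minimal sketch of the idea, using hypothetical frequency thresholds rather than any real AMD binning criteria:

```python
# Illustrative speed-binning: sort chiplets by measured peak frequency
# into bins, reserving the fastest for premium products.
# Cutoff values here are hypothetical, not AMD figures.
def bin_chiplets(measured_fmax_ghz, premium_cutoff=4.8, standard_cutoff=4.2):
    bins = {"premium": [], "standard": [], "value": []}
    for fmax in measured_fmax_ghz:
        if fmax >= premium_cutoff:
            bins["premium"].append(fmax)
        elif fmax >= standard_cutoff:
            bins["standard"].append(fmax)
        else:
            bins["value"].append(fmax)
    return bins

results = bin_chiplets([5.0, 4.5, 4.9, 4.0, 4.3])
# the two fastest chiplets (5.0 and 4.9 GHz) land in the premium bin
```

In practice the test and binning analysis Naffziger refers to is far richer (leakage, voltage/frequency curves, per-core data), but the selection step reduces to this kind of thresholding.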
Naffziger said the increasing focus on energy consumption, together with the area cost of silicon at further-scaled nodes, places a much greater emphasis on accelerators that are more efficient for specific types of workload. “We are specializing our architectures because that’s the only way to meet the compute demand. We need to be able to design subcomponents that can be mixed in different combinations.”
The use of more specialized architectures, however, demands changes to the approach used for integration. Up to now, chipmakers like AMD have for the most part been able to use fan-out packaging, which is relatively simple and inexpensive to employ. Fan-out can even have advantages over interposers: although it needs higher-power drivers and serial interfaces, it can handle the relatively long interconnects typical of multicore-processor and tiered-memory complexes.
The problem for accelerators lies in latency and the energy of data transfers between CPU, memory and execution units, which points to parallel buses operating at as few picojoules per bit as possible. “We need to be able to support thousands of signals between the die: that’s why 3D integration is so compelling,” Naffziger said. “Finding an optimal solution is an iterative process. As we move from less-dense packaging options, the costs go up. 3D requires a lot more processing steps. By nature, 3D is more expensive but the efficiency goes up. So we need to deliver more value than the cost overheads.”
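The pull toward wide parallel buses at low picojoules per bit comes down to simple arithmetic: link power is energy per bit times aggregate bandwidth. A back-of-envelope sketch, with illustrative figures that are not AMD's numbers:

```python
# Link power = energy-per-bit x aggregate bandwidth.
# 1 pJ/bit at 1 Gb/s dissipates 1 mW; it scales linearly from there.
def link_power_watts(pj_per_bit, bandwidth_gbps):
    return pj_per_bit * 1e-12 * bandwidth_gbps * 1e9

# Hypothetical comparison: a package-level serial link at ~5 pJ/bit
# versus a dense 3D parallel interface at ~0.5 pJ/bit, both moving
# 1 Tb/s (1000 Gb/s) between die.
serial_w = link_power_watts(5.0, 1000)    # 5.0 W
dense3d_w = link_power_watts(0.5, 1000)   # 0.5 W
```

At the bandwidths accelerators demand, that order-of-magnitude gap in interconnect power is what makes the extra processing steps of 3D integration worth paying for.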
AMD is not going straight to vertical integration. The iterative design process Naffziger mentioned has resulted in a hybrid of fan-out and interposer integration techniques: the elevated fan-out approach, used on the Instinct product line released in the autumn. It is similar to the EMIB technique used by Intel, particularly by its programmable-logic division. The elevated part refers to the way AMD avoids having to dig a cavity into the substrate to hold the mini-interposer that carries the interconnect between adjacent die. Instead, the chips being connected sit on taller copper pillars. This improves line tolerances, which should result in higher interconnect counts.
For memory, AMD is using hybrid bonding. This has been used to double the amount of cache SRAM that can be stacked in one location. “Had we put that into a planar design, we wouldn’t get those benefits: it would be twice as big,” Naffziger said.
Such hybrid bonding could make its way into logic stacking but there are other issues that have to be solved first. “We would like to see innovations around thermals. The stacking approach is going to increase power density. For the VCache design, we stacked cache on top of cache; we didn’t stack cache on top of a hot CPU. There is currently dummy silicon on top of the CPU,” Naffziger explained. “Power delivery is also a challenge.”
“We have got a lot of engineering challenges,” he added, pointing to test, and to determining whether unpackaged chiplets are functional before assembly, as another key problem. Some of that is handled by a small number of pads dedicated to test at wafer sort, but any additional pads are always expensive and unpopular. Similarly, 3D increases the subtleties of cost and design. “There are things that just couldn’t be done in 2D that we can do in 3D. But it comes with a cost.”