When the move to high-k, metal-gate processes was first mooted, much of the discussion was about cost. Gate-first proponents argued that the gate-last process – which Intel went with at 45nm – would increase cost, although it simplified some of the materials choices. Modelling by Gold Standard Simulations (GSS) indicates that gate-last – the way the industry now seems to be headed – is the sensible choice from a design point of view, at least for bulk processes.
The issue with gate-first is finding a way to maintain a high-quality gate electrode through all the subsequent process steps that define the rest of the transistor. Gate-last avoids the problem by forming a dummy gate, building the rest of the transistor around it, then removing the dummy material and replacing it with the true gate materials, using different material mixes for the NMOS and PMOS electrodes. Clearly, that adds to the number of process steps.
At 20nm, variability and its impact, not only on minimum-sized SRAM transistors but also on standard logic transistors, signal big problems for gate-first processes, according to GSS. The company, founded by Professor Asen Asenov of the University of Glasgow and the source of the earlier current-density simulations of Intel’s finFET structure, used its Garand software to model 10,000 bulk-CMOS MOSFETs, each with a subtly different, randomly generated dopant profile.
As expected for such small devices, variability has a big impact on SRAM behavior, which is proving to be a brake on scaling, although library designers are attempting to work around the problem using circuit-level assists.
But the simulations revealed “another sinister effect”, according to the GSS blog. This one is independent of transistor width, which is not good news for logic or analog devices. “The variability results in approximately one order of magnitude increase in the average leakage current of a single transistor or in the overall leakage current of the chip compared to ideal ‘uniform’ transistors without variability,” GSS argues.
The main source of the problem is metal-gate granularity (MGG) in gate-first processes – an effect that should not be seen to any significant degree in gate-last processes. “The elimination of MGG as a source of statistical variability in the 20nm metal-gate-last technology results in approximately a 10 mV reduction in the threshold voltage standard deviation,” says GSS. “This seemingly small reduction in the statistical variability has a dramatic impact, resulting in 100 mV reduction in the SRAM Vcmin and a 50 per cent reduction in leakage.”
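Why random threshold-voltage variation raises the *average* leakage, rather than just spreading it symmetrically, follows from the exponential dependence of subthreshold current on Vth: the extra leakage from low-Vth devices far outweighs the savings from high-Vth ones. A minimal Monte Carlo sketch in Python illustrates this, and shows how trimming the Vth standard deviation cuts mean leakage; all parameter values are assumptions for illustration, not GSS’s numbers, and this is not a recreation of the Garand simulations:

```python
import numpy as np

rng = np.random.default_rng(42)

n_devices = 10_000            # matches the sample size quoted in the article
vth_nominal = 0.30            # nominal threshold voltage in volts (assumed)
S = 1.5 * 0.02585             # n * kT/q at room temperature, roughly 39 mV (assumed)

def mean_leakage_ratio(sigma_vth):
    """Mean off-state leakage of a varying population, relative to an ideal
    'uniform' population, using I ~ exp(-Vth / (n*kT/q))."""
    vth = rng.normal(vth_nominal, sigma_vth, n_devices)
    leak = np.exp(-vth / S)
    return leak.mean() / np.exp(-vth_nominal / S)

# Analytic check: for a Gaussian Vth spread the ratio is exp(sigma^2 / (2*S^2)).
for sigma in (0.040, 0.050, 0.060):   # 40, 50, 60 mV standard deviation
    mc = mean_leakage_ratio(sigma)
    analytic = np.exp(sigma**2 / (2 * S**2))
    print(f"sigma = {sigma * 1e3:.0f} mV: "
          f"Monte Carlo x{mc:.1f}, analytic x{analytic:.1f}")
```

With these illustrative numbers, a few tens of millivolts of Vth standard deviation multiplies the population’s mean leakage several-fold over the uniform case, and the analytic form exp(σ²/2S²) makes clear why even a 10 mV reduction in σ translates into a disproportionately large leakage saving.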
Graphs and a more detailed discussion are at the GSS blog.
Blog edited since publication: text added to clarify that simulations were for bulk CMOS processes.