Joe Sawicki, executive vice president of IC EDA for Mentor, a Siemens business, is confident about the near-term future of the chip industry, but it is one that will see significant changes thanks to the growing demand for cyber-physical systems and the design styles they require.
“We are believers,” Sawicki stated in an interview at this year’s Design Automation Conference (DAC) in Las Vegas. “The people who say Moore’s Law won’t scale? They are wrong. And wrong for the next six years. There’s a slew of new nodes coming up.”
That does not mean there will not be big changes to the way tools are implemented in order to make the next few generations of SoC possible. One of the main trends at the moment is the incorporation of machine learning into mainstream design tools, something Mentor has already pursued through several avenues, including the 2017 acquisition of Solido Design. Sawicki says R&D is applying both supervised and unsupervised learning techniques to test data to help improve the ramp rate of silicon as it goes to production. “The Solido stuff is the most mature example of machine learning in use to do things like variation analysis,” he notes, but ultimately the techniques will likely spread across all the product groups.
Machine learning expands
“With AI, we see full integration across the stack and at levels above what you see today. We have projects that are employing machine learning as the underpinning for fundamentally new algorithms in EDA. I think in a few years the question of what you are doing with machine learning will be like asking: what are you doing with C++?”
The second key trend is the use of cloud computing to accelerate key parts of the design flow and, potentially in the future, the entire flow. The first usage model is tactical deployment in areas where extra compute capacity is needed to prevent bottlenecks from forming, such as mask production and design-rule checking (DRC).
“The first move to cloud is for burst usage as you come to the end of the design cycle. But it’s clear that if you look at the trend line, six years out cloud concepts become how you put IT together,” Sawicki says.
In these early stages, the main objective is simply to launch a large number of nodes on a fixed allocation for tools such as Calibre. But in the wider IT world, the companies offering services online have shown that they can use dynamic orchestration tools to streamline their operations. The first wave of cloud EDA tools do not, for the most part, take advantage of this level of orchestration. In DRC, for example, project deadlines are such that it makes sense to have flexible scaling, but not such that node counts need to scale rapidly during a run.
However, the usage model for optical proximity correction (OPC) in mask manufacture does hint at more dynamic orchestration. “In the OPC space we deployed five or six years ago a technology that does that, because OPC can take so long without a lot of compute resource. You may only allocate 2000 nodes to run it. But, then someone says ‘I’ve got a hotter lot’. At that point, you determine you need 3000 to handle the workload. In that environment you can dynamically tune those things and someone has a job to manage that. But it’s not used on DRC because that’s not the model, where the main aim is to grab as many nodes as they can without people killing them.”
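The difference between the two usage models can be sketched in a few lines. This is an illustrative toy, not Mentor's actual scheduler: the function names, pool sizes, and the 1000-nodes-per-hot-lot figure are all hypothetical, chosen only to mirror the numbers in the quote above.

```python
# Hypothetical sketch of the two allocation models described above.
# All names and numbers are illustrative, not Mentor's scheduler.

def fixed_allocation(requested_nodes: int, pool_size: int) -> int:
    """DRC-style burst usage: grab as many nodes as the pool allows
    and hold them for the whole run."""
    return min(requested_nodes, pool_size)

def dynamic_allocation(base_nodes: int, hot_lots: int,
                       nodes_per_hot_lot: int = 1000) -> int:
    """OPC-style orchestration: start from a base allocation and
    scale up when higher-priority ('hotter') lots arrive mid-run."""
    return base_nodes + hot_lots * nodes_per_hot_lot

# A DRC run asks for 4000 nodes from a 3000-node pool: it gets the pool.
print(fixed_allocation(4000, 3000))          # 3000

# An OPC run starts at 2000 nodes; one hot lot arrives, scaling to 3000.
print(dynamic_allocation(2000, hot_lots=1))  # 3000
```

The key design difference is when the allocation decision is made: once, up front, for the DRC model, versus continuously during the run for the OPC model.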
To the cloud
Sawicki says the shift towards cloud has begun in earnest, though it has taken some time to gain acceptance among users. “This stuff took up a lot of space in the buzz-sphere,” he notes. “We started getting vocal with customers about two years ago. We asked: what do you want to do? And we had our first white paper on how to run Calibre on a public infrastructure about two years ago. I don’t think any got run for about a year.” But projects such as Calibre in the cloud have demonstrated to users that there are gains to be made.
The bigger picture is the gradual integration of concepts such as the digital twin that are now coming from the world of system-level design, a process that is being helped by the acquisition of Mentor by Siemens PLM. “The Siemens acquisition has been able to help us to ramp up on both acquisitions and internal development,” Sawicki says.
But the integration is operating at a more fundamental level with groups from both sides interacting to find out how they can operate together in order to build virtual products at scale. “Organisationally, we’ve done ‘speed dates’. People say to each other: ‘What have you got? We’ve got this’,” Sawicki explains. After those dates, the teams work together to pull the SoC design parts of the flow into environments such as Siemens’ PAVE360.
Virtual prototype meets digital twin
Today’s main target for digital-twin work involving SoC design is automotive. Sawicki points to the need identified by automotive OEMs such as Toyota to have prototype autonomous vehicles drive more than 8 billion miles in order to iron out any safety deficiencies. Covering those miles on full physical prototypes is impractical: it would simply take too long.
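A rough back-of-envelope calculation shows why. The fleet size and average speed below are illustrative assumptions, not figures from Toyota or Mentor; only the 8-billion-mile target comes from the article.

```python
# Back-of-envelope: how long would 8 billion test miles take on a
# physical fleet? Fleet size and average speed are assumed values.

TARGET_MILES = 8e9
FLEET_SIZE = 1000          # assumed number of prototype vehicles
AVG_SPEED_MPH = 30         # assumed average test-drive speed
HOURS_PER_YEAR = 24 * 365  # driving around the clock

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * HOURS_PER_YEAR
years_needed = TARGET_MILES / miles_per_year
print(round(years_needed, 1))  # roughly 30 years
```

Even under these generous assumptions, a thousand vehicles driving non-stop would need decades, which is why the miles have to be accumulated virtually instead.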
The answer is to complete as many miles as possible in the virtual space. In order to run at a reasonable speed, that calls for emulation combined with server- or cloud-based simulation. “You run real data on the software stack and virtual hardware on virtualised infrastructure,” he says. “Emphatically, we are big believers in digital twin with emulation for simulating large electronics systems.”
Conceptually, the approach is not dissimilar to that employed in SoC design, and Sawicki wryly points out that, through virtual prototyping, the semiconductor world had the digital-twin concept in action well before its integration with the systems employed in mechanical design. “It’s about pulling those two twins together.”
Sawicki adds: “We are also looking at requirements management and traceability down through the IC, which involves pulling things from tools like Polarion. And we are working to make that integration more efficient. For example, the first thing we did with that was a simple API. But then we found performance would be better with modularization. Sometimes you get customer feedback where they say ‘it’s almost enough’ and we do further work on the integration.”
As that integration continues, it will help drive the transistor-count growth that Moore’s Law can sustain into the next decade.