EDA with ‘AI inside’ – Mentor’s Joe Sawicki offers an insider’s view

By Paul Dempsey  |  Posted: September 3, 2019
Topics/Categories: Commentary, Design to Silicon, DFM, Digital/analog implementation, Blog - EDA, ESL/SystemC, HLS, Next Generation Design, Tool development

One important trend in EDA tools – and across those that address broader system design objectives such as the digital twin – is the incorporation of more artificial intelligence (AI) and machine learning (ML) within the software itself, so that customers can deliver their own AI-based projects.

As this incoming EDA generation matures, how these ‘AI inside’ tools are likely to develop is still not always clear to their users. For obvious reasons, much of the emphasis in communication so far has been on what new or evolved software can do rather than how it does it.

During a press meeting in Seoul last week, Joseph Sawicki, Executive Vice President for IC EDA at Mentor, discussed some of the strategies that inform how his company is incorporating AI and ML in its products.

There are four overarching ‘AI inside’ elements, in addition to technology integrations such as adding and ensuring compatibility with leading machine-learning frameworks like TensorFlow. In very broad terms, they are:

  1. Better management of the computational resources needed to realize designs as design complexity increases and process nodes continue to shrink.
  2. Harvesting data within the tool that is neither project- nor process-specific, to enable its continuous enhancement.
  3. Providing tools within which customers can securely use internal and highly sensitive training data without exposing it to others, including the EDA vendor.
  4. Delivering extended support that allows customers to make the best use of AI and ML features within their tools.

Computational resources

One persistent concern is that ML inevitably means a steep ramp in computational resources and thereby costs. That may sometimes be true. It is one of the main reasons why design companies are looking to secure large-scale cloud access for when their existing internal server capacity is stretched beyond its limit.

But Sawicki said that ML built into a tool is already being used to constrain those increases. He cited the specific instance of recent enhancements to Mentor’s Calibre platform for optical proximity correction (OPC).

“Because Calibre is operating on the physical design database, each run of the chip produces billions and billions of data points that are available for analysis. This is the type of data we are taking advantage of. By putting an AI platform into Calibre, we can collect data around those chips and then use that to do new deliveries of value to our customers,” he said.

“So, at 7nm for a critical layer, customers are using up to 8,000 CPUs running for 12 to 24 hours to do this [OPC] work. By using machine learning, we have been able to drop that by a factor of three and constrain the increase in time that would be necessary to produce each of the advanced nodes coming in the future.”

The implication is that ‘AI inside’ can ideally lower the resources needed, or at least limit the scale of any expansion, by progressively adding greater intelligence to a tool. So, even if the resource requirement does go up, it should not rise by as much as some users fear.
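Taking Sawicki’s figures at face value, a quick back-of-the-envelope calculation shows the scale involved. This is a sketch only: it simply applies the quoted threefold reduction uniformly across the quoted run-time range.

```python
# Back-of-the-envelope CPU-hour estimate for one critical-layer OPC run,
# using only the figures Sawicki quoted (8,000 CPUs, 12-24 hours, ~3x reduction).
cpus = 8_000
hours_low, hours_high = 12, 24
ml_speedup = 3  # "drop that by a factor of three"

baseline_low, baseline_high = cpus * hours_low, cpus * hours_high
ml_low, ml_high = baseline_low / ml_speedup, baseline_high / ml_speedup

print(f"Baseline:    {baseline_low:,.0f}-{baseline_high:,.0f} CPU-hours per run")
print(f"With ML OPC: {ml_low:,.0f}-{ml_high:,.0f} CPU-hours per run")
```

On those numbers, a single critical-layer run falls from roughly 96,000–192,000 CPU-hours to roughly 32,000–64,000 CPU-hours.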

Data harvesting

Data enables ML. The argument goes that the more you have, the better your results. But there are limits – as is noted below – as to how much data various players in a supply chain will be willing to share (an issue that has been part of EDA since well before the ‘AI inside’ era).

“But,” Sawicki said, “there’s a class of tools where what you’re looking at is not customer data, but the data about the tool itself.” This is particularly germane in instances where that data is clearly not customer-specific. Here, Sawicki cited the example of place-and-route.

“If you’ve ever talked to someone about place-and-route, how well it performs is highly sensitive to how you set up the tool,” he said. “We have found that if we take and train our system over time based on what the output result is and how it changes with those changes in setup, we can then put in place systems that can automatically set up the tool to optimize performance for the design.”
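Mentor has not published how this works internally. As a minimal sketch of the general idea – learning from the recorded outcomes of earlier runs to suggest a setup for a new design – consider the following, in which the setup knobs, the design features, the quality-of-result metric and the similarity measure are all illustrative assumptions rather than anything from the tools themselves.

```python
# Hypothetical sketch: recommend place-and-route setup options by learning
# from the recorded outcomes of earlier runs. Knob names, features and the
# similarity measure are illustrative assumptions, not Mentor's method.
from dataclasses import dataclass

@dataclass
class RunRecord:
    design_features: dict   # e.g. cell count, utilization
    setup: dict             # tool options used for the run
    qor: float              # quality of result achieved (higher is better)

def recommend_setup(history: list, new_design: dict) -> dict:
    """Pick the setup from the most similar past design with the best QoR."""
    def similarity(a: dict, b: dict) -> float:
        keys = a.keys() & b.keys()
        # Negative sum of relative differences: closer designs score higher.
        return -sum(abs(a[k] - b[k]) / max(abs(a[k]), abs(b[k]), 1e-9) for k in keys)

    # Rank past runs by design similarity first, then by achieved QoR.
    best = max(history, key=lambda r: (similarity(r.design_features, new_design), r.qor))
    return best.setup

history = [
    RunRecord({"cells": 2e6, "util": 0.72}, {"effort": "high", "congestion_driven": True}, qor=0.91),
    RunRecord({"cells": 5e5, "util": 0.60}, {"effort": "medium", "congestion_driven": False}, qor=0.88),
]
print(recommend_setup(history, {"cells": 1.8e6, "util": 0.70}))
```

In a production flow the lookup above would be replaced by a trained model, but the loop is the same: record setups and outcomes, learn the mapping, and use it to pre-configure the next run.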

Customer training data

“When it comes to data about the design itself, or the process itself, the chance that we’re going to get, say, TSMC and Samsung to share training data, no matter how beneficial that might be to their process, is zero,” Sawicki acknowledged.

Yet at the same time, EDA can still provide added value by developing tools that allow users to leverage their own secret sauce within the tool’s context.

“Because the designs are so large and so complex, they do, in and of themselves, become a rich source of training data. And so we look for applications where there is sufficient data that can be owned by the customer to give them tool leverage and still generate the value,” he said.

The message here? Some tradeoff between proprietary data, demands for security and tool capability will always be with us but, as long as it is acknowledged and well managed within a flow, there will be scope for significantly greater efficiencies.

Extended support

In many cases, the fruits of AI and ML within a tool should and will be invisible to the user: the GUI will remain consistent, as will reporting. In other cases, though, new methodology options will open up (for example, the use of high-level synthesis for integrated software-hardware-algorithm optimization), or there will be the issue of inserting and leveraging closely held internal training data.

Inevitably, additional support will be required in some cases. Here, ‘training’ as a term stretches across both the user and what the tool does.

“If you look at virtually all of these applications, the determinant of success is how well the system is trained,” Sawicki said.

“As we do product development now, one key factor for us, if we are going to be giving our customers success, is that we need to manage the process of how they do training. It includes things like best practice. It includes things like automatic data generation qualification and cleaning. And it includes ensuring that the training process is a well managed and defined activity to set up these types of flow.”
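Sawicki did not detail what those qualification and cleaning steps look like in practice. Purely as an illustrative sketch – the check types, thresholds and field names below are assumptions – such a gate on incoming training data might do little more than this:

```python
# Illustrative sketch of automatic training-data qualification of the kind
# Sawicki describes. The specific checks, thresholds and field names are
# assumptions for illustration only.
def qualify_training_samples(samples: list, required_keys: set) -> list:
    """Drop incomplete, duplicate, or out-of-range samples before training."""
    seen = set()
    clean = []
    for s in samples:
        if not required_keys.issubset(s):           # completeness/schema check
            continue
        key = tuple(sorted((k, s[k]) for k in required_keys))
        if key in seen:                             # de-duplication
            continue
        if any(not (0.0 <= s[k] <= 1.0) for k in required_keys):  # range check (assumes normalized features)
            continue
        seen.add(key)
        clean.append(s)
    return clean

raw = [{"density": 0.8, "error": 0.02}, {"density": 0.8, "error": 0.02}, {"density": 1.7, "error": 0.1}]
print(qualify_training_samples(raw, {"density", "error"}))  # keeps only the first sample
```

The point is less the individual checks than that they run automatically and repeatably, so that training becomes, in Sawicki’s words, “a well managed and defined activity”.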

Mentor’s ‘AI inside’ progress

In his presentation, Sawicki concentrated on AI-informed tools that Mentor has already launched. For example, Calibre Machine Learning OPC (mlOPC) was launched in May, and the integration of advanced place-and-route techniques across the Oasys-RTL, Nitro-SoC and Calibre InRoute sequence is well established.

But Sawicki added that Mentor now has “40 to 50 products that have had some application of AI.” Most of these products are still to be announced, so expect further innovations within this type of framework to become public soon.
