Solido sets up lab to drive machine-learning adoption

By Chris Edwards | Posted: April 6, 2017
Topics/Categories: Blog - EDA

Solido Design Automation aims to bring the types of machine-learning techniques the company has used for its physical-analysis tools to a wider range of EDA tools through the launch of its ML Labs initiative.

Amit Gupta, CEO of Solido, said: "We have been focused on machine learning for variation-aware design for the past seven years and we are expanding that out into other areas. The first area is characterization. But we are getting demand from customers to expand into other areas within EDA."

Although deep learning has been the focus for many companies working on machine learning, Solido has assembled a toolkit that spans a wider range of techniques, including Gaussian processes, genetic algorithms, and deep learning itself, deploying whichever performs best on a specific application.

"The way we approach these problems is to look at what are the right technologies for the problem at hand. We put together a whole suite of tools that weren't part of Variation Designer. We look at what are the right regressors and classifiers that make sense for a particular problem and then develop proprietary extensions.

"What really differentiates our machine-learning technology is that this is machine learning for engineering problems where the cost of errors is high," Gupta claimed. "The result can't just be an estimation. It must be production accurate and verifiable. We adaptively build models that are self-verifying. The algorithms are designed to capture high-order interactions with non-linearities and discontinuities."

Collaborative exploration

The idea behind ML Labs is to work with customers to develop products that can employ machine-learning techniques to reduce runtimes. For example, the variation analysis tools are designed to slash the number of simulations required to see the effects of PVT variation. "It can come down from a hundred thousand to a thousand," Gupta said. "But you are still getting the accuracy of brute-force solutions."
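Solido has not published its algorithms, but the general idea of trading brute-force sweeps for a self-verifying surrogate can be sketched with a Gaussian process, one of the techniques Gupta names. In this illustrative example (all function names and the toy "simulation" are assumptions, not Solido's code), the model repeatedly requests a simulation wherever its own predictive uncertainty is highest, and stops when that uncertainty falls below a tolerance, covering a 1,000-point sweep with a few dozen runs:

```python
# Illustrative sketch (not Solido's algorithm): adaptive sampling with a
# Gaussian-process surrogate to approximate an "expensive" simulation
# using far fewer evaluations than a brute-force sweep.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate(x):
    """Stand-in for a costly SPICE run: a smooth nonlinear response."""
    return np.sin(3 * x) + 0.5 * x**2

# Brute-force grid the surrogate will be judged against.
grid = np.linspace(-2, 2, 1000).reshape(-1, 1)
truth = simulate(grid).ravel()

# Start with a handful of seed simulations.
X = np.linspace(-2, 2, 5).reshape(-1, 1)
y = simulate(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for _ in range(20):                      # simulation budget
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    if std.max() < 1e-3:                 # "self-verifying": stop once the
        break                            # model's own uncertainty is low
    nxt = grid[np.argmax(std)]           # simulate where the model is least sure
    X = np.vstack([X, nxt])
    y = np.append(y, simulate(nxt).item())

print(f"{len(X)} simulations instead of {len(grid)}")
```

The predictive standard deviation plays the role of the built-in verification Gupta describes: the loop only terminates when the model can vouch for its own accuracy everywhere on the sweep.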

The characterization tool began as part of the collaborative process that Solido is now formalizing and promoting through ML Labs, Gupta explained. "A customer came to us – they are users of Variation Designer – and asked us can we solve challenges that they are having in the library characterization domain."

The results were two tools: Predictor and Statistical Characterizer.

"Characterization can take days or weeks and is often on the critical path for semiconductor companies," Gupta said.

Predictor creates Liberty models of libraries at new conditions from existing libraries that capture behavior at different corners. The problem that has arisen in recent years is that the impacts of voltage and temperature changes, among other factors, are often non-linear at advanced nodes. The machine-learning techniques analyze how the variables interact in the existing libraries and extract the most important connections so that the tool can build better-targeted simulations.
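The underlying regression problem can be illustrated with a hypothetical sketch (the delay model, corner values, and kernel choice here are all invented for illustration and say nothing about how Predictor actually works): learn a cell delay's nonlinear dependence on voltage and temperature from a few characterized corners, then predict a Liberty-style value at an uncharacterized one.

```python
# Hypothetical sketch of corner prediction: fit a nonlinear regressor to
# existing characterized (V, T) corners, then query a new corner.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def delay_model(v, t):
    """Stand-in for a characterized cell delay (ns), nonlinear in V and T."""
    return 1.0 / (v - 0.3) ** 1.3 * (1 + 0.002 * (t - 25))

# Existing characterized corners: (voltage, temperature) pairs.
corners = np.array([[0.7, -40], [0.7, 125], [0.9, -40],
                    [0.9, 125], [1.1, -40], [1.1, 125], [0.9, 25]])
delays = np.array([delay_model(v, t) for v, t in corners])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.1, 50.0]),
                              normalize_y=True)
gp.fit(corners, delays)

# Predict an uncharacterized corner; the uncertainty estimate can flag
# when a real characterization run is still needed.
pred, std = gp.predict(np.array([[0.8, 85]]), return_std=True)
print(f"predicted delay at 0.8V/85C: {pred[0]:.3f} ns (+/- {std[0]:.3f})")
```

A linear interpolation between corners would miss the curvature in the voltage term, which is the non-linearity at advanced nodes that the article says motivates the machine-learning approach.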

The Statistical Characterizer performs a similar job for statistical-timing libraries for LVF, AOCV, and POCV flows. "It handles non-Gaussian distributions, which is very important for advanced and low-power nodes. It adaptively selects simulations under the hood to meet the accuracy requirements of the user."
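Why non-Gaussian handling matters can be shown with a small numerical example (the lognormal distribution here is a generic stand-in for skewed near-threshold delay behavior, not data from the tool): under skew, the usual Gaussian "mean plus three sigma" estimate understates the true 3-sigma quantile that an LVF-style flow needs.

```python
# Sketch: Gaussian vs quantile-based 3-sigma estimates on skewed delays.
import numpy as np

rng = np.random.default_rng(1)
# Skewed (lognormal) delay samples, as can occur at low-voltage corners.
delays = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

gaussian_3sigma = delays.mean() + 3 * delays.std()   # assumes normality
true_3sigma = np.quantile(delays, 0.99865)           # actual 3-sigma point

print(f"Gaussian estimate: {gaussian_3sigma:.2f}")
print(f"True quantile:     {true_3sigma:.2f}")
```

For this distribution the quantile-based figure is markedly higher than the Gaussian estimate, so a flow that assumed normality would under-margin the timing.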

Machine-learning benchmarking

To develop further tools, Solido has built a process it has formalized under the ML Labs banner. The first part of the process is to determine which machine-learning techniques will work best, and they may not fit pre-conceived ideas.

"Customers come to us and immediately get to talking about deep learning or another regression technique. What we quickly do is get into the data. We have a benchmarking portfolio set up that helps the experts figure out what the technology and product fit is," Gupta explained.

In terms of potential applications, they could expand beyond the physical modeling that has been Solido's core focus so far. "Right now we are pretty open. We are talking to various customers about their problems. We see if there is a reduction in the time to get a solution versus brute-force techniques. It's a very quick process to figure out whether machine learning provides benefits or not. If it does, we start an alpha, beta productization effort," Gupta noted.

Gupta said the emphasis on benchmarking is on working with large data sets to see whether the potential speedups are real, rather than taking in small sets of sample data only to find that the machine-learning algorithm plateaus quickly. "We are not taking problems we are making up with just ten input variables and then trying to scale up to ten thousand input variables. We try to figure out where are the gotchas that come up."


