Cadence uses reinforcement learning to tune flow
Cadence Design Systems has launched a tool that the company claims can speed up implementation by applying machine learning across the flow.
The Cerebrus “intelligent chip explorer” uses machine learning to gauge how well different directives and constraints will work on a target design and automatically selects those that deliver better power, performance and area (PPA) characteristics. Similar to Google’s recently disclosed MLplacer, the tool employs reinforcement learning to determine how well input parameters translate into implementation performance. However, Rod Metcalfe, product management group director at Cadence, said the tool has a wider remit than the Google experiment, which focused on optimizing hard-macro placement. Cerebrus controls the inputs into existing Cadence implementation engines, some of which use their own AI-based optimizers as well as numerical algorithms.
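How such a reward-driven search works can be illustrated with a toy example. The sketch below is purely illustrative: the directive names, the stubbed-out flow run and the reward weights are all invented, and the loop is a minimal epsilon-greedy bandit rather than Cadence’s undisclosed implementation.

```python
# Illustrative only: an epsilon-greedy bandit that learns which set of flow
# inputs earns the highest PPA-based reward. The parameter names, run_flow()
# stub and reward weights are hypothetical stand-ins for the proprietary
# engines and metrics Cerebrus actually works with.
import itertools
import random

PARAM_SPACE = {
    "target_utilization": [0.6, 0.7, 0.8],   # invented placement directive
    "clock_uncertainty_ps": [20, 50, 100],   # invented timing constraint
}

# Enumerate every configuration as a bandit "arm".
ARMS = [dict(zip(PARAM_SPACE, vals))
        for vals in itertools.product(*PARAM_SPACE.values())]

def run_flow(cfg):
    """Stub for one noisy implementation run; returns fake PPA metrics."""
    base = -100 * cfg["target_utilization"] - 0.3 * cfg["clock_uncertainty_ps"]
    return {"worst_slack_ps": base + random.gauss(0, 10),
            "leakage_mw": 10 + 5 * cfg["target_utilization"]}

def reward(ppa):
    """Scalar reward combining timing and power; the weights are arbitrary."""
    return ppa["worst_slack_ps"] - 2.0 * ppa["leakage_mw"]

random.seed(3)
counts = [0] * len(ARMS)
values = [0.0] * len(ARMS)            # running mean reward per arm

for trial in range(200):
    if random.random() < 0.1:         # explore 10 per cent of the time
        i = random.randrange(len(ARMS))
    else:                             # otherwise exploit the best estimate
        i = max(range(len(ARMS)), key=lambda j: values[j])
    r = reward(run_flow(ARMS[i]))
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]   # incremental mean update

best = max(range(len(ARMS)), key=lambda j: values[j])
print("best flow configuration found:", ARMS[best])
```

The pattern, not the specifics, is the point: each candidate set of flow inputs earns a scalar reward derived from PPA metrics, and the learner gradually concentrates its trials on the configurations with the best estimated reward.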
“This is a very different approach. We are using reinforcement learning and have been doing a lot of research on it, but this is for the full flow, from RTL to GDS,” Metcalfe said. “We use the engine to tune the flow, not focusing on one part of it. The aim is to get to better PPA quicker than with a manual approach.”
In principle, the tool lets teams reach an efficient result more quickly, relieving implementation engineers of the detailed decisions a manual flow demands. In one customer’s trial of automated floorplan exploration, the number of paths that failed timing was cut by more than 80 per cent and leakage power fell by 17 per cent.
Time and power reductions
“We’ve observed more than an 8 per cent power reduction on some of our most critical blocks in just a few days versus many months of manual effort. In addition, we are using Cerebrus for automated floorplan power distribution network sizing, which has resulted in more than 50 per cent better final design timing,” said Sangyum Kim, vice president of design technology at Samsung Foundry.
Metcalfe said the development team had to tackle issues with reinforcement learning similar to those the Google engineers said they faced with MLplacer. A key issue with EDA is that it takes time for the benefits of a particular strategy to become evident, so the “reward” allocated to the AI model has to be delayed. Metcalfe said that in the strategies employed by the Cerebrus designers, the reward calculation does not have to wait until the GDS result is ready. “You don’t want to make the decision too early but we found you don’t have to go through the whole flow for everything. If you ran the full flow and only calculated the reward at the end, that’s very wasteful. We’ve been able to apply learning as the flow progresses. If something is not showing good behavior, you can terminate that job,” he said, adding that the engine supports incremental methods so that it can use the results from early stages and then train on variations from a midpoint.
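The early-termination idea is similar in spirit to successive halving in hyperparameter search: score every candidate on intermediate metrics after each flow stage and only let the promising ones continue. The Python sketch below illustrates that general technique; the stage names and the noisy scoring model are invented, since the real engine’s intermediate metrics are not public.

```python
# Illustrative sketch of staged pruning: candidates are scored after each
# flow stage and weak ones are terminated early, so only promising
# configurations run the full flow. Stage names and the noise model are
# hypothetical.
import random

STAGES = ["synthesis", "placement", "clock_tree", "routing"]

def stage_score(candidate_quality, stage_idx):
    """Fake intermediate metric: noisy early on, more faithful later."""
    noise = 1.0 / (stage_idx + 1)     # later stages are more informative
    return candidate_quality + random.gauss(0, noise)

random.seed(7)
candidates = {f"cfg{i}": random.uniform(0, 1) for i in range(16)}
survivors = list(candidates)

for stage_idx, stage in enumerate(STAGES):
    scored = sorted(survivors,
                    key=lambda c: stage_score(candidates[c], stage_idx),
                    reverse=True)
    survivors = scored[: max(1, len(scored) // 2)]   # terminate the bottom half
    print(f"after {stage}: {len(survivors)} jobs still running")

print("candidate taken through the full flow:", survivors[0])
```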
The typical usage model would be for a customer to start by training Cerebrus on a target design to give it enough data on which to work. As Google found, results from a trained model do not typically transfer to designs with different characteristics. But by using a cluster of CPUs to work on representative blocks and chip designs upfront, the company expects customers will be able to cut implementation time significantly when the later designs are ready to go.
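In effect, the upfront training is a warm start: estimates learned on representative blocks seed the search on the next design, so far fewer fresh runs are needed. The toy sketch below reuses the bandit framing from earlier to show the idea; the configurations, reward figures and the count-shrinking trick are all invented for illustration.

```python
# Toy transfer sketch: value estimates learned on a representative block
# seed the search on a new, similar block. The configuration names and
# "true" rewards are invented.
import random

CONFIGS = ["cfgA", "cfgB", "cfgC", "cfgD"]
# Hidden true rewards: the new block is similar but not identical.
TRUE_OLD = {"cfgA": 1.0, "cfgB": 3.0, "cfgC": 2.0, "cfgD": 0.5}
TRUE_NEW = {"cfgA": 1.2, "cfgB": 2.8, "cfgC": 2.5, "cfgD": 0.4}

def run(cfg, truth):
    return truth[cfg] + random.gauss(0, 0.3)   # one noisy flow run

def learn(truth, trials, values, counts):
    """Epsilon-greedy updates on (possibly pre-seeded) value estimates."""
    for _ in range(trials):
        cfg = (random.choice(CONFIGS) if random.random() < 0.2
               else max(CONFIGS, key=values.get))
        counts[cfg] += 1
        values[cfg] += (run(cfg, truth) - values[cfg]) / counts[cfg]
    return values, counts

random.seed(1)
# Phase 1: many runs on a representative block (the upfront training).
values, counts = learn(TRUE_OLD, 400,
                       {c: 0.0 for c in CONFIGS}, {c: 0 for c in CONFIGS})
# Phase 2: warm-started search on the new block. The estimates carry over;
# shrinking the counts lets fresh evidence move them quickly.
counts = {c: 2 for c in CONFIGS}
values, counts = learn(TRUE_NEW, 40, values, counts)
print("recommended config for the new block:", max(CONFIGS, key=values.get))
```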
“Generally, what we see people doing is that they take their existing flow and give that to Cerebrus and then see if it comes up with better metrics. Customers build confidence that way. The next project is the one that uses it,” Metcalfe said. “The customer trains it from scratch on their own blocks and their own classes of design. And they own that model: it’s part of their design IP.”