Reset optimization pays big dividends before simulation
Reset optimization is another one of those design issues that has leapt in complexity and importance as we have moved to ever more complex systems-on-chip. Like clock domain crossing, it is one that we need to resolve to the greatest degree possible before entering simulation.
The traditional approach to resets was to route a reset to every flop. Back in the day, you might have done this even though it has always entailed a large overhead in routing. It helped avoid X ‘unknown’ states arising during simulation for every state element that was not reinitialized at restart, and it was a hedge against optimistic behavior by the simulator that could hide bugs.
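To make the X hazard concrete, here is a minimal SystemVerilog sketch (module and signal names are illustrative, not from any particular design) contrasting a flop with a routed reset against one without:

```systemverilog
// Illustrative only: two flops, one with an async reset and one without.
// In simulation, 'q_no_rst' starts as X and stays X until the first clock
// edge captures a known 'd', while 'q_rst' is driven to a known value
// from the moment reset asserts.
module reset_styles (
  input  logic clk,
  input  logic rst_n,   // active-low asynchronous reset
  input  logic d,
  output logic q_rst,
  output logic q_no_rst
);
  // Reset routed to this flop: known value from cycle zero
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) q_rst <= 1'b0;
    else        q_rst <= d;

  // No reset routed here: the simulator sees X until a known
  // value propagates in through 'd'
  always_ff @(posedge clk)
    q_no_rst <= d;
endmodule
```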
Our objectives today, though, include not only conserving routing resources but also catching problems before we bring up RTL for simulation, to avoid infeasible run times at both RTL and, worse still, the gate level.
There is then one other important factor for reset optimization: its close connection to power optimization.
Matching power and performance increasingly involves the use of retention cells. These retain the state of elements of the design even when a block appears to be powered off; in fact, to allow for a faster restart bring-up, they must continue to consume static power even when the SoC is ‘at rest’. So, minimizing the number of retention cells cuts power consumption and extends battery life.
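As a behavioral sketch only, and not any vendor's library cell model, a retention flop can be thought of as a main flop plus an always-on shadow register. The shadow is exactly where the standing static-power cost comes from:

```systemverilog
// Behavioral sketch of a retention flop (illustrative; power gating of
// the main flop is not modeled). The shadow register sits on the
// always-on supply: it keeps state while the main flop's domain is
// powered down, which is why every retention cell adds static power
// even when the SoC is 'at rest'.
module retention_ff (
  input  logic clk,
  input  logic rst_n,
  input  logic save,     // pulse before power-down: copy state to shadow
  input  logic restore,  // pulse after power-up: copy shadow back
  input  logic d,
  output logic q
);
  logic shadow;  // always-on storage

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)       q <= 1'b0;
    else if (restore) q <= shadow;
    else              q <= d;

  always_ff @(posedge clk)
    if (save) shadow <= q;
endmodule
```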
Reset the ‘endless’ threat
Resolving such complex issues based purely on simulations will no longer work. It will put you on the path toward so-called ‘endless verification’.
A thorough and intelligent pre-simulation analysis of your reset scheme can now point both to the best reset routing and the minimum number of expensive retention cells you need to implement.
At the pre-simulation stage, tools like Ascent XV from my company, Real Intent, can undertake a smart heuristic analysis of how one flop’s reset depends on another and of the relationships between different blocks. They then produce a report with further insights and characterization, based on formal and structural techniques, that goes some way beyond just ‘a best guess’.
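The kind of flop-to-flop reset dependency such analysis traces can be as simple as a derived reset. In this hypothetical fragment (names invented for illustration), a downstream flop is held in reset until a control bit, itself a flop, has been programmed:

```systemverilog
// Hypothetical fragment: 'block_rst_n' depends on a control flop, so
// the downstream flop's reset behavior is only defined once 'ctrl_en'
// has itself left reset. This is the sort of flop-to-flop reset
// dependency that structural/formal analysis traces before simulation
// (and a derived combinational reset like this is also a glitch risk
// such tools would flag).
module reset_dependency (
  input  logic clk,
  input  logic por_n,     // power-on reset, active low
  input  logic enable_wr,
  input  logic d,
  output logic q
);
  logic ctrl_en;
  logic block_rst_n;

  // Control register, cleared by power-on reset
  always_ff @(posedge clk or negedge por_n)
    if (!por_n)         ctrl_en <= 1'b0;
    else if (enable_wr) ctrl_en <= 1'b1;

  // Derived reset: downstream block held in reset until 'ctrl_en' is set
  assign block_rst_n = por_n & ctrl_en;

  always_ff @(posedge clk or negedge block_rst_n)
    if (!block_rst_n) q <= 1'b0;
    else              q <= d;
endmodule
```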
The objective is to inform the designer of either the specifics or the flavor of the potential problems in the design. The designer can then review this report – which ideally should offer some alternatives itself – and undertake reset and related power optimization before moving into full simulation.
Orders of magnitude do apply
The time savings available are significant. Unresolved reset issues lead, of course, to X states: uncertainties in simulation that take considerable time to diagnose and fix. The familiar ‘Rule of 10’ applies: catch a problem one stage earlier and it is a 10X cheaper fix.
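Part of what makes those X states so expensive is RTL X-optimism. In this illustrative fragment (signal names are hypothetical), an unreset flop yields an X that an RTL ‘if’ silently resolves to the else branch, so the problem only surfaces at gate level:

```systemverilog
// Illustrative X-optimism: 'mode' has no reset, so it is X from time
// zero until the first clock edge captures a known value of 'd'. While
// 'mode' is X, RTL simulation treats 'if (mode)' as false and takes the
// else branch, so the design appears to behave. A gate-level netlist
// propagates the X instead, and the mismatch surfaces far later, where
// it is much more expensive to debug.
module x_optimism (
  input  logic clk,
  input  logic d,
  output logic out
);
  logic mode;  // no reset routed to this flop

  always_ff @(posedge clk)
    mode <= d;

  always_comb
    if (mode) out = 1'b1;  // skipped while 'mode' is X at RTL
    else      out = 1'b0;  // X-optimistic default path
endmodule
```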
Beyond that, pre-simulation techniques are becoming more powerful with each generation. Our latest release of Ascent XV has enhanced algorithms that in themselves offer a 10X improvement in run time over the previous generation.
Preparing your code carefully for simulation has a direct benefit at the bottom line by leveraging increasingly mature strategies. Can you afford not to consider them within your flow?