How can designers deal with the increasing complexity of the multi-million-gate designs they are being asked to tackle these days? Speakers on a panel at the Design, Automation and Test in Europe conference in Dresden outlined three approaches: moving to higher levels of abstraction; making more subtle use of existing synthesis strategies; and better organisation.
Doug Aitelli, CEO and president of C synthesis tools provider Calypto, argued that the impact of increasing design complexity is being felt in two ways: in the amount of software that now has to be verified alongside the hardware; and in the sheer number of gates that have to be designed. According to ITRS figures that Aitelli quoted, the number of gates that a designer can produce each year has been static, at about 200,000, since 2006.
IP is being pressed into use to fill the design gap, but as Aitelli said, today’s IP is more like complex subsystems than the simple functional blocks of yesteryear “and they have to be integrated with the same tools”. He also argued that although IP blocks enable reuse, retargeting blocks for different applications creates a long tail of derivatives that all need to be maintained and that offer variable quality of results.
Unsurprisingly, Aitelli argues that designers should move to the more abstract SystemC language to express their ideas, and that IP blocks should be rewritten in SystemC so that they can be retargeted more effectively: “What we’re driving for is complete source-level, C level sign-off.”
To make this possible, Aitelli argues that SystemC will have to start including mechanisms for expressing synthesis intent, for example the way that loops should be unrolled, or bus signals aligned, so that the results of a C-to-RTL synthesis step match what gets implemented.
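As a rough illustration of the kind of intent Aitelli describes, today’s high-level synthesis tools usually take such hints as tool-specific pragmas attached to the source. The sketch below is plain C++, not any particular vendor’s flow; the pragma shown is illustrative only (real directive syntax varies by tool), and the second function shows what a synthesis tool would effectively derive from a full-unroll directive:

```cpp
#include <cassert>

// Accumulate an 8-tap dot product. A C-synthesis tool reading an
// "unroll" directive (pragma syntax differs per vendor; the line
// below is a placeholder, not a real tool directive) would replicate
// the loop body so all eight multiplies can run in parallel.
int dot8(const int a[8], const int b[8]) {
    int acc = 0;
    // #pragma <tool> unroll   -- synthesis intent only; no effect in C
    for (int i = 0; i < 8; ++i)
        acc += a[i] * b[i];
    return acc;
}

// What full unrolling effectively produces: eight parallel multiplies
// feeding an adder tree, with no loop counter left in the hardware.
int dot8_unrolled(const int a[8], const int b[8]) {
    return (a[0]*b[0] + a[1]*b[1]) + (a[2]*b[2] + a[3]*b[3])
         + (a[4]*b[4] + a[5]*b[5]) + (a[6]*b[6] + a[7]*b[7]);
}
```

C-level sign-off, in the sense Aitelli describes, would mean guaranteeing that the looped source plus its directives and the structure the tool derives from it remain provably equivalent.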
“The only way to address the complexity-of-design issue is to move the level at which people are working up to the C level,” he said. “We also need to get more robust code coverage tools at the C level. If the whole [development] environment isn’t there people won’t move up a level.”
Aitelli says that there are already SystemC sign-off flows, but they are restricted to cycle-accurate designs, which in turn restricts the freedom of the C synthesis tool and also reduces simulation throughput.
Antun Domic, senior vice president and general manager of Synopsys, argued that although he sees more C-based design for modelling and design exploration “I do see RTL as the starting point” for chip implementation.
Domic says Synopsys has tried various things to improve the capacity of the RTL-based flow to handle larger designs. For example, it has developed its place-and-route tools to be more forgiving of incomplete information so that they can be run earlier in the design elaboration process. And although a full RTL-to-gates synthesis, overcoming all errors and meeting all constraints, still demands very complete data to work properly, Synopsys is enabling more design exploration through its Design Compiler Explorer, which can create netlists that may still contain errors and warnings but that are within 10% of the final netlist. These netlists can be produced six times faster than a ‘perfect’ RTL-to-gates synthesis.
Design Compiler has also been adapted to do approximate routing, and tuned so that it can target different design intents, such as pure performance, performance at low power, and low power.
Domic also pointed out that design complexity no longer resides at the leading process nodes alone – producing an SoC based on an ARM Cortex-M0 microcontroller core on a 180nm process, targeted for low power and minimal area and supporting 10 voltage domains, is as much of a challenge as working on a 28nm process: “Complexity is not a synonym for the latest process nodes.”
Bipin Nair, general manager of the high tech business unit of Infotech Enterprises, based in Bangalore, argued that with rising design and verification costs came the need to tackle design complexity on three fronts.
The first is the technology front: making greater use of larger IP blocks; making greater use of platform-based design; doing better implementation planning; working with more accurate estimation tools; reducing the impact of software verification through intelligent test benches; and reducing verification effort through the use of executable specs and verification IP.
The second is people skills and development: focusing on low-power design and verification methodologies; taking multicore design as standard; developing cosimulation expertise to isolate issues at analog/digital boundaries; creating tailored training courses to teach these skills; and developing the skills base of India’s Tier 2 cities to help deliver designs.
Nair’s third focus is on management development, including looking for the best ‘techno-managerial’ flows and methodologies; choosing the best tools from across the ecosystem; being prepared to augment standard EDA flows with in-house tools; developing close foundry relationships; and better project management for design teams that are becoming larger and more geographically dispersed.
“The challenge is the rapid adoption of technology advances, continued investment in people and better project management,” he added.
Gary Smith of Gary Smith EDA, who moderated the panel, said that companies designing mobile phone chips are now working with multi-platform designs, such as a core of similar complexity to Qualcomm’s Snapdragon, around which they add other similarly complex blocks to build up their chip. This approach has already been used to create one design with 104 million gates, according to Smith.
Domic said that some companies are attacking the complexity issue by redefining the most important jobs in their design teams: instead of owning a critical block, senior engineers now own the successful implementation of the interfaces between major blocks.