Remember the design gap? It’s back

By Chris Edwards | Posted: June 27, 2018

Fifteen years ago, the chipmaking industry became very concerned about the design gap. At the 40th Design Automation Conference (DAC) in Anaheim, Gary Smith, then an analyst with Gartner Dataquest, pointed out a problem that had emerged in the rollout of the 90nm process. Scaling according to Moore’s Law was providing the transistors but chipmakers were finding it increasingly difficult to put them to good use.

“When 90nm processes became available, there were no designs at 50 million gates [or above] though 90nm gives designers 100 million gates to play with,” Smith said in 2003. “EDA requires a major technology shift every 10 to 12 years to keep up with developments in the silicon.”

As the design gap opened up in the early 2000s, Synopsys and others bet primarily on IP reuse, rather than system-level and behavioral compilation tools, as their main weapon for closing it. The decision was largely vindicated. IP reuse, up to the level of entire design platforms, has helped implement multi-billion-transistor SoCs. But a new gap has opened up, according to DARPA, and a new shift in EDA is needed.

Andreas Olofsson, program manager in DARPA’s microsystems technology office, says Gordon Moore’s seminal article in the April 19, 1965 edition of Electronics pointed to that kind of pressure on design in a section on the third page headed “Day of reckoning”.

Olofsson’s argument is that the reckoning has come in the form of design cost, which, rather than manufacturing issues, has become the go/no-go factor in approving a project.

Professor Andrew Kahng of the University of California at San Diego said at this year’s DAC in San Francisco: “Wafer cost almost always ends up a nickel or dime per square millimeter. But the design cost of that square millimeter is out of control.”

In his keynote at DAC on Tuesday, IBM vice president for AI Dario Gil described the problem as one of intense difficulty that has become critical because of pressure to complete projects more quickly. “The design cycle may last for years,” he said, which is a problem in fast-moving, hardware-enabled areas such as machine learning. “Given that there is a renaissance going on in the world of AI, increasing automation in design is incredibly important.”

Step and repeat

Up to the end of 2016, Olofsson was CEO of Adapteva, which achieved fame when it crowdfunded its parallel processor. Olofsson uses the Adapteva experience as a demonstration of one way in which it’s possible to cut design costs – making the most of replicated blocks.

This time around, more extensive high-level design automation may well be the answer, in line with Moore’s comments from the third page of his 1965 article: “Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering.”

Last year, DARPA put together several programs under the banner of “page three”, in reference to the relevant section of Moore’s article. “The objective is to create a no-human-in-the-loop 24-hour turnaround layout generator for system on chips, system in packages and printed circuit boards,” Olofsson says.

A different gap

The problem that faces such a project is that the nature of today’s design gap is different to that of the early 2000s. Kahng claims the problem lies in the unpredictability of design: small changes in tool settings can lead to big differences in die area or performance. He points to the 14nm finFET implementation of the Pulpino SoC, a research device based on the open-source RISC-V architecture, where a frequency change of just 10MHz on a 1GHz target can lead to an area increase of 6 per cent.
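To make that sensitivity concrete, the sketch below shows how a sweep harness might step a frequency target through a flow and record the resulting area. The run_flow() function is a stand-in invented for this illustration, not a real tool API; a genuine harness would invoke the actual synthesis and place-and-route tools.

```python
# Hypothetical sweep harness illustrating Kahng's point: tiny changes in the
# frequency target can produce outsized area swings. run_flow() is a stand-in
# for a real synthesis/place-and-route invocation, invented for illustration.
import random

def run_flow(freq_mhz: int) -> float:
    """Placeholder for a full implementation run; returns die area in mm^2.
    Real flows are deterministic but chaotically sensitive to their settings;
    the seeded noise here merely mimics that behaviour."""
    random.seed(freq_mhz)                 # same target -> same (pretend) result
    return 1.0 + 0.06 * random.random()   # up to ~6 per cent area spread

if __name__ == "__main__":
    for target in (990, 1000, 1010):      # MHz targets around a 1GHz goal
        print(f"target {target} MHz -> area {run_flow(target):.3f} mm^2")
```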

Although it’s not a term that carries happy memories for those in EDA, Olofsson has resurrected “silicon compiler” to describe what he believes is needed to achieve a dramatic increase in the automation of design. However, DARPA has far from ruled out a further increase in IP reuse – this time based on the open-source movement, albeit on “non-viral” licenses rather than the General Public License (GPL) that prevails in software. That is the subject of the Posh Open Source Hardware (POSH) program, a name that seems to nod to the recursive naming of the GPL-protected GNU tools (GNU’s Not Unix). Olofsson points to RISC-V, OpenCores, and the Open Compute Project as early examples of what might be achieved using open-source hardware IP.

“In my view, you can only design so fast with productivity gains. The best gains can really be made by not designing at all. With any components that have already been used and verified we should be able to just drop them in at close to zero cost,” Olofsson says.

DARPA projects

The overall aim of the DARPA programs is to make it possible to get design costs for a large SoC down to the $2m level. Although this figure might itself be dwarfed by mask costs on leading-edge nodes, Olofsson points to the use of multiproject wafers (MPW) as a way to constrain the production costs for runs of around 10,000 units, which are the kinds of volumes that the Department of Defense typically needs.

The core program for heavily automated design is Intelligent Design of Electronic Assets (IDEA). DARPA issued its initial call for contributions for both IDEA and POSH last autumn and awarded its first contract for a “page three design” project to Northrop Grumman earlier this month (June 11, 2018).

IDEA splits into two parts. The first technical area (TA1) covers automated, unified physical design from un-annotated schematics and RTL code. DARPA hopes this will include support for automated retiming and gate-level power-saving techniques as well as test-logic insertion. The second (TA2) seeks an answer to “intent-driven system synthesis”, using a large database of ready-made parts from which candidate blocks can be selected to support a high-level design.
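As a rough illustration of the TA2 idea (not DARPA’s actual specification), the sketch below filters and ranks a toy parts database against a high-level requirement. The catalog schema, block names and ranking rule are all invented for this example.

```python
# Toy "ready-made parts database" and a selector that returns candidate
# blocks for a stated intent. Schema and entries are invented for this sketch.
CATALOG = [
    {"name": "riscv_core_small", "function": "cpu", "area_mm2": 0.08, "mw": 12},
    {"name": "riscv_core_big",   "function": "cpu", "area_mm2": 0.35, "mw": 55},
    {"name": "sram_64k",         "function": "mem", "area_mm2": 0.05, "mw": 3},
]

def candidates(function: str, max_mw: float):
    """Return verified blocks matching the requested function, smallest first."""
    hits = [p for p in CATALOG if p["function"] == function and p["mw"] <= max_mw]
    return sorted(hits, key=lambda p: p["area_mm2"])

print(candidates("cpu", max_mw=20))   # -> the small core only
```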

Learned behavior

DARPA expects the systems developed under these programs to make use of techniques like machine learning and data mining. Gil described experiments in automated design as part of the SynTunSys project at IBM as one approach the industry could examine. The software would run many synthesis jobs in parallel with different parameters to try to find sweet spots automatically. He claimed the technique applied to one design improved total negative slack by 36 per cent and cut power by 7 per cent. “This was after the experts had done the best they could,” Gil claimed.
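A minimal sketch of that approach, assuming a run_synthesis() stub in place of the vendor tool and invented scoring weights, might look like this: launch jobs in parallel across a grid of tool knobs and keep the best-scoring result.

```python
# SynTunSys-style exploration: run many synthesis jobs in parallel with
# different knob settings and pick the sweet spot. run_synthesis() and the
# score weights are invented stand-ins, not IBM's actual code or cost model.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_synthesis(effort: int, vt_mix: int):
    """Placeholder returning (total_negative_slack_ps, power_mw)."""
    tns = -1000 + 300 * effort - 50 * vt_mix    # toy cost surface
    power = 100 + 5 * effort + 8 * vt_mix
    return tns, power

def score(result):
    tns, power = result
    return tns - 2.0 * power    # trade slack against power; weights invented

if __name__ == "__main__":
    grid = list(product(range(4), range(4)))    # 16 knob combinations
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_synthesis, *zip(*grid)))
    best = max(zip(grid, results), key=lambda pair: score(pair[1]))
    print("best knobs:", best[0], "-> (TNS, power):", best[1])
```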

Kahng sees machine learning as one of the crucial technologies for an automated flow, applying techniques similar to those in SynTunSys. He proposed a “multi-armed bandit” approach in which arrays of computers try different strategies in a random-walk manner, each iteration trying to get closer to the target. The key problem is deciding when to kill simulation runs or implementation steps that get stuck. To address this, a strategy modeled on blackjack seems to offer a viable approach, with the refinement of waiting for three negative signals before killing a job that looks unpromising.
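The three-strikes rule can be sketched as follows: several candidate runs advance checkpoint by checkpoint, and any run that fails to improve three checkpoints in a row is killed. The per-checkpoint improvement model below is synthetic, invented purely for illustration.

```python
# Toy version of the "three negative signals" rule: runs that fail to improve
# three checkpoints in a row are killed; the rest keep accumulating quality.
import random

def step_quality(arm: int, rng: random.Random) -> float:
    """Pretend per-checkpoint improvement; odd-numbered arms tend to plateau."""
    return rng.gauss(0.5 if arm % 2 == 0 else -0.1, 0.3)

def search(n_arms=4, checkpoints=20, strikes_allowed=3, seed=1):
    rng = random.Random(seed)
    alive = {arm: {"score": 0.0, "strikes": 0} for arm in range(n_arms)}
    for _ in range(checkpoints):
        for arm, state in list(alive.items()):
            delta = step_quality(arm, rng)
            state["score"] += max(delta, 0.0)
            state["strikes"] = state["strikes"] + 1 if delta <= 0 else 0
            if state["strikes"] >= strikes_allowed:   # three in a row: kill
                del alive[arm]
    return alive

print(search())   # survivors and their accumulated quality
```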

The use of machine learning may also help create efficient models that can predict aspects such as timing across a large number of corners, so that implementation tools can then move to answers that correlate with reality much more quickly. “We want to predict timing at corners we don’t actually analyze. Our hope is to run static timing analysis on a bounded number of corners, 14 say, and from that be able to predict all the others,” Kahng said.
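In spirit, that prediction task amounts to learning a map from slacks at the analyzed corners to slacks at the rest, trained on previously characterized designs. The sketch below uses a plain least-squares fit on synthetic data; a production model would doubtless be richer, but the shape of the problem is the same.

```python
# Predicting timing at unanalyzed corners from a bounded set of analyzed ones.
# All data here is synthetic; only the problem framing follows Kahng's remarks.
import numpy as np

rng = np.random.default_rng(0)
n_designs, analyzed, unanalyzed = 200, 14, 50

X = rng.normal(size=(n_designs, analyzed))          # slack at 14 analyzed corners
W_true = rng.normal(size=(analyzed, unanalyzed))    # hidden corner correlations
Y = X @ W_true + 0.01 * rng.normal(size=(n_designs, unanalyzed))

W, *_ = np.linalg.lstsq(X, Y, rcond=None)           # fit the linear model
new_design = rng.normal(size=(1, analyzed))         # 14 measured slacks
predicted = new_design @ W                          # estimate remaining corners
print(predicted.shape)                              # (1, 50)
```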

Kahng said the sharing of data would be critical in making automated design successful, which may prove to be a stumbling block. In his keynote on Wednesday, Professor David Patterson of UC Berkeley pointed to open-source hardware, such as his group’s RISC-V project, as helping to boost the idea of agile design in which teams iterate very quickly.

Although researchers are taking a long-term view of building a much more automated flow, Olofsson expects the interim phase of IDEA to be complete by the end of the year, with an initial integration of technologies that puts the program on its way to an automated silicon compiler able to achieve 50 per cent of its PPA targets. “The ultimate aim is to reach 100 per cent PPA. Maybe not better than every team in the world, but one that will beat a lot of teams in implementations,” he said at DAC.

