SoC prototyping ascends the learning curve
A panel at DATE 2013 in Grenoble heard from EDA vendors and users (including a leading FPGA vendor) that the growing number of prototyping techniques has once more put the business on a learning curve.
Once upon a time, prototyping involved choosing some of the options on the menu, but not necessarily all. Today, increasing complexity, particularly for SoCs, means using all of them. The problem has become one of knowing how much resource to point at each technique to meet your goals and deadlines.
There are now, broadly speaking, five main components in the prototyping process, particularly given the increasing role played by software: virtual prototyping, simulation, emulation, FPGA prototyping and first silicon.
However, this landscape is further complicated by the increasing use of hybrid stages. Synopsys has addressed this by marrying its virtual and FPGA stages, so you can progressively combine maturing new design work with stable, re-used RTL. Cadence Design Systems is also looking at the hybrid issue, but with greater emphasis on using acceleration to marry different techniques.
“And, to be quite honest with you, we are learning together with our customers,” said Frank Schirrmeister, Senior Director at Cadence. “These hybrids are not new. There have been people, like those here from ST[Microelectronics], who have been talking about them for years. And they have published on what they have achieved. But the key is to get the return on investment for the users right. Because it is an investment to take a model at different stages and to be able to switch back and forth.
“What we’re working on now are tools to help customers make the right connections. You need to have the transactors available, but also the tools that help you move designs through the different stages.”
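A transactor here is the piece that bridges abstraction levels: it converts transaction-level traffic from a virtual prototype into pin-level activity for RTL running in an emulator or on an FPGA. A minimal sketch in SystemC/TLM-2.0 of what such a bridge does (the module, its pin names and its bus timing are illustrative assumptions, not any vendor’s product):

    // Minimal sketch of a bus transactor in SystemC/TLM-2.0. Illustrative
    // only: the module, pin names and 10 ns bus cycle are assumptions, not
    // any vendor's API. It accepts loosely-timed TLM transactions from the
    // virtual-prototype side and replays them as pin-level activity toward
    // an RTL (emulator/FPGA) side.
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstdint>

    struct SimpleBusTransactor : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleBusTransactor> tlm_side; // from the virtual prototype
        sc_core::sc_out<bool>                wr_en;  // hypothetical pins toward the RTL side
        sc_core::sc_out<sc_dt::sc_uint<32> > addr;
        sc_core::sc_out<sc_dt::sc_uint<32> > wdata;

        SC_CTOR(SimpleBusTransactor)
            : tlm_side("tlm_side"), wr_en("wr_en"), addr("addr"), wdata("wdata") {
            tlm_side.register_b_transport(this, &SimpleBusTransactor::b_transport);
        }

        // Translate one generic-payload write into a single pin-level bus
        // cycle. b_transport runs in the initiator's thread context, so it
        // may consume simulation time via wait().
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            if (trans.get_command() == tlm::TLM_WRITE_COMMAND) {
                addr.write(trans.get_address());
                wdata.write(*reinterpret_cast<std::uint32_t*>(trans.get_data_ptr()));
                wr_en.write(true);
                sc_core::wait(10, sc_core::SC_NS);  // one assumed bus cycle
                wr_en.write(false);
                trans.set_response_status(tlm::TLM_OK_RESPONSE);
            } else {
                trans.set_response_status(tlm::TLM_COMMAND_ERROR_RESPONSE);
            }
        }
    };

Switching “back and forth”, as Schirrmeister puts it, means having such bridges, and the tooling around them, for every interface a design crosses between stages.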
Joachim Kunkel, Senior VP and General Manager for Synopsys’ Solutions Group, agreed, and said that his company is drawing both on its own experience and on its partnerships with users. In-house, the needs of its IP operations have pushed it to experiment with both FPGA-based and virtual prototyping.
“Internally, we need to do two things. We need to validate the IP, and we also have to develop software. We have to develop drivers and we have to do this relatively quickly before the RTL is stabilized. We do this using virtual prototyping. Then we take the software to the FPGA-based prototyping system to create the actual system for certification.
“It is a proving ground for the different techniques and we can see the pluses and the minuses – when does it work and when doesn’t it. And that’s what we bring to the customers.”
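The reason drivers written this early can move across stages is that well-structured driver code touches the IP only through its memory-mapped registers; only the base address supplied by the platform differs between a virtual prototype and an FPGA board. A minimal sketch, with all register names, offsets and bit values hypothetical:

    // Minimal sketch of register-level driver code that can move unchanged
    // from a virtual prototype to an FPGA board: it touches the IP only
    // through memory-mapped registers, so only the platform-supplied base
    // address differs. All names, offsets and bit values are hypothetical.
    #include <cstdint>

    namespace uart {
        // Register offsets for an imaginary UART block.
        constexpr std::uintptr_t CTRL   = 0x00;  // control register
        constexpr std::uintptr_t STATUS = 0x04;  // status register
        constexpr std::uintptr_t TXDATA = 0x08;  // transmit data register

        constexpr std::uint32_t CTRL_ENABLE   = 1u << 0;
        constexpr std::uint32_t STATUS_TXFULL = 1u << 1;

        inline volatile std::uint32_t* reg(std::uintptr_t base, std::uintptr_t off) {
            return reinterpret_cast<volatile std::uint32_t*>(base + off);
        }

        // Enable the block, then push one byte out, spinning while the
        // transmit FIFO reports full.
        inline void putc(std::uintptr_t base, char c) {
            *reg(base, CTRL) |= CTRL_ENABLE;
            while (*reg(base, STATUS) & STATUS_TXFULL) { /* busy-wait */ }
            *reg(base, TXDATA) = static_cast<std::uint32_t>(c);
        }
    }

    int main() {
        // The only platform-specific line: the base address comes from the
        // virtual prototype's memory map or the FPGA board's, respectively.
        constexpr std::uintptr_t UART0_BASE = 0x40000000;  // assumed
        uart::putc(UART0_BASE, 'A');
    }

The same source can then be taken to the FPGA-based system for certification, as Kunkel describes.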
But it is a two-way street. “Our customers build SoCs that are much bigger than anything we do ourselves, obviously. And that’s where we start learning from them,” continued Kunkel.
“Working with customers, we can see what happens as prototypes of complete applications get produced and, as we do, we set up things that help them with planning and bring-up, then incorporate that knowledge into new features.”
However, users do want and need more.
“They [the vendors] are all doing their part, making their contribution,” said Ivo Bolsens, CTO of Xilinx. “But for us virtual prototyping was a pretty big frustration because you never have the models that you want. You get your IP from different places. So, it would be good if there were some alignment on that.”
Bipin Nair, General Manager of the Hi Tech Business Unit at Infotech Enterprises, also cited models as an issue, but his call for alignment went further.
“There are some efforts today, but there is a need to drive more standards and interoperability, for the models and for the prototyping techniques generally. Then there’s less to worry about at the execution level,” he said.
Mix-and-match, though, is here to stay, endorsed by leading users even as they see room for refinement. For example, the panel also heard that while Xilinx developed much of its Zynq platform (an ARM subsystem with programmable logic) using FPGA prototypes, as you might expect, it had also made significant use of emulation.
“One of the challenges you have with FPGA is the visibility,” Bolsens acknowledged. “This was reflected in our use of the emulator. We basically did the prototyping with the FPGA and when something went wrong, we took that specific testbench to the emulator and looked at it with the visibility that we needed.
“Now if we can improve that process by having the visibility and the capability of tracing signals that you have with an emulator in virtual prototyping, that would be a big step forward.”
However, Bolsens added that he still strongly feels the FPGA prototype is the most appropriate stage for early software development, combining sufficient performance with the relatively mature RTL that typically marks that stage.
But the last word must go to Infotech’s Nair: “What I’d say is that the main challenge is being able to choose the right balance of effort versus benefit. That’s what it boils down to. And how much of it is redesign and how much of it is new design that has to be done.”