Emulation is growing at a rapid rate, passing $300M in value last year. Some analysts believe that a $1B emulation market may not be that far away.
Jean-Marie Brunet is Senior Director of Marketing at Mentor Graphics with responsibility for its Veloce emulation division. He is well placed to identify both established and emerging trends.
For Brunet, the sweet spots lie around markets that target large die. That has probably long been the case, but today such markets increasingly combine a need for traditional lab-based in-circuit emulation (ICE) with the ability to exploit an increasing number of techniques available through virtualization.
Virtualization is a fast-growing segment because it frees the emulator from the lab and puts it on corporate networks. Then, it allows users to leverage features such as ‘black box’ models and apps that target particular design components and verification tasks. Brunet’s Veloce team has been particularly aggressive in boosting its Apps offering this year.
Brunet offers networking as a good example of such a market.
“Given the potential in our [Veloce] hardware, we can go up to one billion gates. For networking, we’re talking about some of the largest silicon out there – six hundred, even eight or nine hundred million gates.
“Veloce has the capacity to run extensive verification on these chips where FPGA[-based prototyping] cannot,” says Brunet.
'Shift left' with emulation
Networking also highlights the opportunities emulation offers to ‘shift left’ and pull tasks up the design flow.
“In June, we announced a partnership with Ixia. It provides the main Ethernet tester used by companies like Cisco [Systems], Broadcom, and MediaTek – all key players in networking. The big thing about Ixia is that it has superb technology, but users haven’t been able to do much with it until they had first silicon,” says Brunet.
Under the Mentor-Ixia collaboration, the IxNetwork Virtual Edition has been integrated with the Veloce platform to deliver a cohesive flow from simulation to the lab.
“We’ve got to the point where you can run a lot of [Ixia-based] tests in the emulator, recreating sufficiently faithful real-world conditions there on the DUT, using testcases that would previously have had to wait until you pushed the ‘tape-out’ button. We have written software that integrates a script and a traffic flow. It is a production-traffic generator and you can run it to verify the chip,” says Brunet.
But how does networking also harness the further benefits of virtualization? Take the three tier-one Ixia users Brunet cites. All have large international hardware and software design teams collaborating on each new product generation. The increased efficiency achieved when a single emulator (or emulator cluster) can be shared as a resource across those teams has obvious attractions.
“However, you can also think about the number of ports that are being demanded now of networking silicon. Trying to manage that challenge in a lab environment is too much; you need to virtualize much of the work,” says Brunet.
Emulation for memory
Brunet then argues that trends within the memory market also show why emulation is growing around ICE and virtualization.
“If we take memory, historically it has been dominated by ICE-centric use-models,” Brunet says. “But that is changing: today’s memory designs still contain significant amounts of logic, but the software stacks are far more complex, and there’s more to it than that.
“These are not necessarily new problems for the memory guys. They have been staffing up to meet them. But these problems are not among their core competences. As virtualized emulation models have evolved, they’ve done a lot and we’ve done a lot to enable the virtualization you might want for software, in conjunction with the ICE tasks that you still carry out.”
It is here that Brunet addresses a common misconception in the debate over virtualization versus ICE. It is not about whether you use one or the other, but how you reach the right balance between the two.
“For any large SoC design, what do you have? You have a lot of standard interfaces and you can have virtualized ‘black boxes’ for all of those. But if that’s all your design has, you have to ask, ‘Where’s the differentiation?’” says Brunet.
“Well, over here, there’s another set of interfaces that are non-standard. Now, no client is going to let us as a vendor model those, because that’s their differentiation – those are their secrets. That is the kind of verification that will run through ICE.
“As a vendor, what you have to be able to say is that your emulator environment is good at both. And that’s where we’ve also spent a lot of time and research building up Veloce.”
Driving emulation toward $1B
All this poses a question: Where next after memory and networking, if the time and effort – and the magical $1B market – are to pay off?
“Automotive is very interesting,” says Brunet. “It’s got a lot of the things you look for: the die-size is increasing, the verification requirements are very tough and very broad because of [the] ISO 26262 [standard]. And the sector shows the growing need to run ‘design for security’: You can’t have malicious attacks on various automotive subsystems if you’re going to pass more control to the processors and add lots of sensors.
“But there are a few things that still need to happen. You have a lot of dedicated interfaces – CAN, FlexRay, Automotive Ethernet. But what we don’t yet have, particularly for verification, is any agreed set of benchmarks. If you take the example of mobile multimedia, there you have those reference points, benchmarks like AnTuTu. Automotive doesn’t have those yet.
“As a vendor we’re expected to handle lots of different benchmarks from lots of different customers and, no, that isn’t fun or easy.
“The market though will adapt. We’ve started out in other areas like that – when I was in [Mentor’s] Calibre division on the DFM side, we originally had to work with four different ways of doing things for multi-patterning. And then, people saw the sense of unifying those things.”