Emulation and system-level power verification were topics that featured prominently when we interviewed Mentor Graphics’ chairman and CEO, Wally Rhines. They are also at the heart of one of our new technical posts. But while talking to Wally, we also got his views on the prospects for the 32-thru-20nm node cluster and a topic many of you are exploring: the potential role of the cloud in design.
The real cost of emulation
“What surprises people is that emulation is about two orders of magnitude less expensive than simulation on a per-cycle basis, if you’re going to do a lot of verification. For the big companies, that’s a no-brainer and it’s why we’re inundated with business right now. The whole industry is seeing a massive rise in revenue for emulation as major players equip farms for their engineers. We’ve passed the point where you can do full verification with simulation,” says Rhines.
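A back-of-the-envelope calculation shows how a two-orders-of-magnitude per-cycle gap can arise even when the emulator costs far more up front. All figures below are illustrative assumptions for the sketch, not Mentor’s actual prices or throughput numbers.

```python
# Illustrative amortized cost-per-cycle comparison. All capital costs and
# clock rates below are hypothetical assumptions, not vendor figures.

def cost_per_cycle(capital_cost_usd, cycles_per_second, useful_life_seconds):
    """Amortized cost of one verified cycle over the platform's useful life."""
    total_cycles = cycles_per_second * useful_life_seconds
    return capital_cost_usd / total_cycles

THREE_YEARS = 3 * 365 * 24 * 3600  # seconds of round-the-clock use

# Hypothetical simulation server: $5,000, full-SoC RTL sim at ~100 Hz effective.
sim = cost_per_cycle(5_000, 100, THREE_YEARS)

# Hypothetical desktop emulator: $300,000, running the design at ~1 MHz.
emu = cost_per_cycle(300_000, 1_000_000, THREE_YEARS)

print(f"simulation: ${sim:.2e} per cycle")
print(f"emulation:  ${emu:.2e} per cycle")
print(f"emulation is ~{sim / emu:.0f}x cheaper per cycle")
```

Under these assumptions the emulator’s 10,000x throughput advantage swamps its 60x capital premium, landing in the ~100x-cheaper-per-cycle range Rhines describes.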
“But that last point isn’t just true for the top tier. The next stage is to address those companies who say, ‘Well I’m not big enough to buy a full emulator.’”
Mentor is looking to broaden the emulation market in a number of ways, the most immediate being to lower the hardware cost. “In our last generation of Veloce, we offered very small desktop emulators. They were still at $100,000 to $300,000, but you are comparing that against multi-million dollar top-of-the-range products,” says Rhines.
This, though, is where cloud computing could also come into play.
“You may have some users who say, ‘I want to buy cycles’. The cloud is one way to do that. You have the move toward virtual peripherals that suddenly makes an emulator more a piece of traditional IT computing equipment – the user doesn’t have to go in and do in-circuit emulation and plug in cards and things like that. It means we can have a server farm, and sell time to multiple users by-the-minute, or by-the-hour.”
Mentor is selling emulation cycles remotely, though Rhines notes, “I can’t say that we have users that have taken us up on it in any meaningful way yet. But if we move toward a totally virtual emulation of a system, we may see more of it.”
The cloud, though a hot topic, still has broader questions to resolve to the satisfaction of both vendors and users.
“Looking at the tools in general, people have come in and asked what we can make available in the cloud and under what conditions. And their reaction has been, ‘Well that’s very interesting.’ But then we don’t hear from them,” says Rhines.
“And right now, we don’t hear from them for two main reasons. Number one is their concern about security, despite the assurances we give them. Number two, the big companies have their own internal clouds. They have their own issues with those, which they and we are addressing in terms of how we deploy our software. But that’s separate again from taking time on, say, a Mentor farm or through a third party.
“What you also have – and all the big three pretty much see this – is that the cloud doesn’t mean a big change in the pricing model. The core is still, if you buy big quantities, you pay less per unit than if you buy small quantities.”
So, while the cloud could be a convenient delivery mechanism in the future, everyone is moving forward thoughtfully for now.
Moving low power to the ESL
Low power is another area where all the vendors are pushing hard, for obvious reasons. However, given Mentor’s history in this segment, Rhines says there is one recent shift to note and another on the way.
“We put a great deal of effort into low power in the 1990s and the early part of this decade to take it up to the RTL level. We worked with companies like Texas Instruments and Nokia on what became UPF. That gives you a taste of how far in advance you have to be developing this kind of capability before it becomes mainstream,” says Rhines.
“It’s clear that it’s only since we have been at 45nm that power has become a mainstream design issue – before that, it was important but it was specialized. So the question then is, ‘What’s the next step?’”
In Mentor’s case, the answer is ESL-based low power design.
“That’s where you use tools to do trade-offs between power and performance at a system level rather than an RTL level,” Rhines says.
“You have a SystemC or C description of a design; a transaction-level analysis extracting the power information or model from the existing and reused IP and creating a procedure; a set of policies to generate approximations of the power dissipation for other actions that occur in the system; and then you’re putting it together at the system level and making good architectural tradeoffs.”
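The flow Rhines describes – per-IP power models plus policies that approximate dissipation for system activity – can be caricatured in a few lines. This is a minimal sketch under simple energy-per-transaction assumptions; the class names, IP blocks, and figures are hypothetical, not Vista’s actual API or models.

```python
# Minimal sketch of transaction-level power estimation for architectural
# trade-offs. All model names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PowerModel:
    """Per-IP power model: dynamic energy per transaction plus static leakage."""
    energy_per_txn_nj: float   # nanojoules per transaction
    leakage_mw: float          # milliwatts while powered on

# Power models attached to existing / reused IP blocks.
ip_models = {
    "cpu":  PowerModel(energy_per_txn_nj=2.0, leakage_mw=5.0),
    "dma":  PowerModel(energy_per_txn_nj=0.5, leakage_mw=1.0),
    "dram": PowerModel(energy_per_txn_nj=4.0, leakage_mw=8.0),
}

def estimate_mw(txn_counts, duration_ms):
    """Approximate average power for a workload given as transaction
    counts per IP block over a time window."""
    total_mw = 0.0
    for ip, model in ip_models.items():
        txns = txn_counts.get(ip, 0)
        # energy (J) / time (s), converted to mW
        dynamic_mw = (txns * model.energy_per_txn_nj * 1e-9) / (duration_ms * 1e-3) * 1e3
        total_mw += dynamic_mw + model.leakage_mw
    return total_mw

# Architectural trade-off: copy a buffer with the CPU vs. offloading to DMA.
cpu_copy = estimate_mw({"cpu": 100_000, "dram": 100_000}, duration_ms=10)
dma_copy = estimate_mw({"cpu": 1_000, "dma": 100_000, "dram": 100_000}, duration_ms=10)
print(f"CPU copy: {cpu_copy:.1f} mW, DMA offload: {dma_copy:.1f} mW")
```

The point is the level of abstraction: transactions and coarse per-IP models, not gate- or RTL-level switching activity, are enough to rank architectural options early.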
Again, Mentor’s been here for a while in the shape of its Vista product.
“That product’s been in the market for a number of years but at dozens not thousands of users. It’s just now burgeoning forth into a broader base,” Rhines says.
Rhines’ optimism from 32nm on
It is nigh-on impossible these days to talk to senior Mentor staff and not raise manufacturing – DFM, particularly through Calibre, has been a major market for the company. So what does the top man think of prospects for the latest node, particularly as the foundries and their EDA partners roll out full details on their support for it?
“I predict that the node that basically runs from 32 down through 20nm is going to be the most prolific node in history. It’s going to be such high volume and eventually such high yield that it will stimulate a new wave of design not only for new capabilities but also for redesign of existing products because you can cost reduce and add performance,” declares Rhines.
“Why do I say that? Because the capital investment in this node has been so substantial. The foundry industry – which had been spending $7B a year for more than a decade – all of a sudden doubled its investment in 2010. It spent, in fact, more than $14B, and then moved to almost triple the original level, to $18B, in 2011.
“So, we’ve had an enormous investment in capacity, but if you look across the industry today that capacity is full. That tells me that, like every other node in history, there are node problems, there are yield problems, there are throughput problems – and since I’ve been through more nodes than most of your readers, I can tell you that we will get through this one too.
“But because of the sheer volume, it will become a very promising node and one that people will feel compelled to take advantage of.”
There have been issues with the time from tapeout to first silicon getting longer, raised most notably this year – in public at least – by Nvidia. But Rhines argues that this was inevitable given the number of layers that now have to be processed in the latest designs.
“As you do that, even as you do that with great efficiency, you cannot but increase the product development latency, the prototyping latency that adds to the cycle,” he says.
“While Moore’s Law is time-based, the learning curve is not. Now, Moore’s Law will become obsolete because it’s an exponential and there’s nothing that really says the nodes have to occur every two-to-three years. They could be every three-to-four years, or we could do half-nodes. That is all tied to how long it takes to get the infrastructure in place.
“The important things are the cost per transistor or the MIPS once you implement the node. Are you coming down the learning curve? Is the cost per transistor getting better at the correct rate as you grow the total value? As long as you do that, things are fine.
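Rhines’ distinction between a time-based Moore’s Law and a volume-based learning curve can be sketched with a Wright’s-law style model, where cost per transistor falls by a fixed fraction with each doubling of cumulative volume – regardless of how many calendar years that doubling takes. The 70% curve and starting cost below are illustrative assumptions.

```python
# Sketch of a volume-based learning curve: each doubling of cumulative
# shipped volume multiplies unit cost by a fixed factor. The 70% curve
# and arbitrary starting cost are assumptions for illustration.

import math

def cost_per_transistor(initial_cost, cumulative_units, learning_rate=0.7):
    """Wright's-law style model: cost after `cumulative_units` shipped,
    with cost scaled by `learning_rate` per doubling of volume."""
    doublings = math.log2(cumulative_units)
    return initial_cost * learning_rate ** doublings

# Cost depends on cumulative volume shipped, not on elapsed years.
for units in (1, 10, 100, 1_000, 10_000):
    print(f"{units:>6} units: {cost_per_transistor(100.0, units):8.2f} (arb. cost units)")
```

On this model a node that ships in huge volume comes down its cost curve quickly even if node transitions stretch from two-to-three years to three-to-four – which is exactly why the 32-thru-20nm cluster’s full fabs matter more than the calendar.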
“It’s inevitable that the manufacturing prototyping cycle time will increase with complexity as long as you’re just adding layers to the IC.”