Does the internet of things (IoT) require a change in design techniques? A number of people involved in the EDA industry reckon it does, partly because it could seed a much richer variety of SoC design and partly because the devices will demand much greater attention to detail when it comes to issues such as energy consumption.
Chris Rowen, CTO of the IP group at Cadence Design Systems, says he is looking forward to “an explosion in diversity of design that we see emerging”, although he is quick to take aim at the term IoT itself. “The IoT label doesn’t do justice to the diversity of devices being designed. At the lowest extreme you have very low data-rate sensing nodes. Then you move to higher sampling rates but where not much processing is done locally, or, in other applications, you may have smart nodes doing exhaustive analysis. There are automotive applications such as ADAS [advanced driver assistance systems]. What we are going to see is that the IoT label will be replaced by much more specific language.”
John Heinlein, vice president of marketing in ARM’s IP division, says: “You are seeing sensor fusion. More and more people are integrating sensors side by side and bringing greater diversity to products. The number of configuration options will continue to grow.”
Even in wearables, there will be strong diversity, Heinlein adds: “We believe it’s not one category but a whole spectrum. Some have high-end graphics but it goes down to fitness wearables that are optimized for long battery life, or deeply embedded wearables that are attached to the body.”
Drew Wingard, CTO of Sonics, says there are some likely generalities: “IoT chips are in general at a lower complexity level than those being constructed for smartphones and applications processors. You would think that would be good. But the problem is nobody knows what to build. We are in a situation where the silicon technology exists. But the requirements for the end applications tend to be very specific and the power requirements are so stringent that it’s difficult to attack that using the traditional techniques employed for applications processors, where you tend to over-integrate.”
These factors point to a situation where customization for the application is key – which in turn points to an increase in design starts. There might even be an argument for a resurgence of designs that use the faster-turnaround techniques once associated with the long-defunct gate-array business. Or at least, that might be the case if it were not for the other key requirements.
“The challenge is that to be as close to optimum on form factor and power envelope as possible, we require a level of integration we are not used to,” says Wingard.
ARM fellow Rob Aitken says: “A lot of these designs call for sensor fusion and analog integration. We are seeing a lot of people doing analog sensors and then integrating them into a design. You can’t really do that with EDA for Dummies. You need proper tools for the analog and digital part and you need to be conscious of your flow.
“It is certainly feasible using existing design tools but it is not something where you would want to buy a basic kit. A lot also revolves around power management. You need to understand how the power domains interact with each other.”
Effective integration is a particular issue for wearables today, Wingard says, a target market where traditional techniques are not getting systems makers close to where they need to be. “We have seen with wearables that they suffer so much from battery-life problems that the manufacturers don’t get a chance to learn, because the devices are so unattractive. With the watch that won’t tell time for the whole day you can’t learn [from use in the real world] because you are so far off the mark to begin with.”
Because users need to be comfortable with the size, form and feel of the device as well as its lifetime, physical prototyping does not work all that well. Wingard says: “It has to be real silicon. It can’t be a virtual platform. We can’t learn enough about these apps without getting an attractive mix of features so you can learn about them.”
The result will be that first-generation parts may borrow some techniques from the world of the applications processor before the architecture is honed to the point where it can be made more cheaply. “What will be sacrificed in early chip designs is optimality in terms of cost,” Wingard claims.
“The systems company has to be able to do designs very quickly. They need optimum power but they don’t have to be optimum from a cost perspective. The die is a bit bigger but if enough is shut off you can get close on power. I don’t think companies will care if the first ones cost $10 or $15 apiece.”
Subsequent generations will have to be much more streamlined. Wingard adds: “There will be a phase where the cost matters like crazy.”
Early designs could well be made on a more advanced process, then rolled back to an older, cheaper process node if that node meets the density and power targets. Foundries have responded to the need for lower-power options on mature processes. Just ahead of ARM Techcon, TSMC launched a trio of low-leakage process options for the mature midrange lying between 28nm and 55nm. With the 180nm ULL node providing an option for low standby power for analog-heavy, digital-light IoT nodes, those midrange processes are likely to lie in the cost sweet-spot for many systems – providing reasonable density with much lower non-recurring engineering (NRE) costs than the more advanced technologies.
Rowen says: “The specialized IoT-oriented SoCs are extreme-fit SoCs. It is all about being at the right cost and power points. They will often be built at 28nm and above not on 16nm finFET and below. And it is much more about rapid adaptation to emerging or special function applications.”
As well as leakage, operating voltages are coming down compared with what was normal when these processes were first introduced. Aitken says: “When you take today’s design techniques and apply them to the old nodes you get these amazing results. 180nm, for example, has seen lower operating voltages: people there are looking at 0.7V. And there are people doing 0.35V at 65nm.”
However, the ultra low-voltage designs are likely to be reserved for specialized applications. Although subthreshold circuitry – which tends to lie in the region below 0.5V for most of these processes – leads to low instantaneous power, it’s not necessarily the low-energy option because the switching rates are so low that leakage starts becoming a major factor in overall consumption.
“For energy-harvesting designs, for example, sub-threshold is maybe what the doctor ordered,” Aitken says. “But mainstream design is definitely avoiding subthreshold for now. There are a lot of challenges that you have to work out. The key ones are variability and performance. Part of what we’re doing in R&D in ARM is make the experience of designing with lower voltages easier. We are getting to the point where it will get easier to get into near threshold.”
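The tradeoff Aitken and the preceding paragraph describe – instantaneous power falling with voltage while energy per operation eventually rises again because leakage accumulates over ever-slower operations – can be sketched with a toy model. All constants below (threshold voltage, capacitance, leakage current, delay exponents) are invented for illustration; they are not data from any real process.

```python
import math

VT = 0.5        # notional threshold voltage (V)
C_EFF = 1e-9    # switched capacitance per operation (F)
I_LEAK = 1e-4   # assumed roughly constant chip leakage current (A)
T_NOM = 10e-9   # operation time at the nominal 1.2V corner (s)

def op_time(vdd):
    """Delay model: polynomial slowdown above Vt, exponential below it."""
    if vdd > VT + 0.02:
        return T_NOM * ((1.2 - VT) / (vdd - VT)) ** 1.5
    t_at_vt = T_NOM * ((1.2 - VT) / 0.02) ** 1.5
    return t_at_vt * math.exp((VT - vdd) / 0.09)

def energy_per_op(vdd):
    """Return (dynamic, leakage, total) energy in joules per operation."""
    dynamic = C_EFF * vdd ** 2             # E_dyn = C_eff * Vdd^2
    leakage = I_LEAK * vdd * op_time(vdd)  # leakage power * time awake
    return dynamic, leakage, dynamic + leakage

for vdd in (1.2, 0.9, 0.7, 0.55, 0.45, 0.35):
    dyn, leak, total = energy_per_op(vdd)
    print(f"Vdd={vdd:.2f}V  dynamic={dyn:.2e}J  leakage={leak:.2e}J  total={total:.2e}J")
```

With these illustrative numbers the energy minimum lands near threshold rather than below it – dynamic energy shrinks with Vdd squared, but once operations slow exponentially, leakage energy per operation swamps the savings, which is why subthreshold looks attractive on paper but near-threshold is the easier win.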
Although process choices will help, the key to energy efficiency lies at the architectural level. Rowen refers to the most likely form of architectural partitioning as “cognitive system layering”, where the most important factor is the duty cycle of each layer – the amount of time it is awake rather than in a low-leakage sleep. The key to this type of design is to prevent the applications processor from waking up as much as possible and to focus the bulk of the computation on the low-energy layers. As long as you know which ones they are. As the experience with subthreshold design has shown, what looks to be low power on paper may not be the most efficient solution. The problem is knowing.
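The duty-cycle arithmetic behind this layering can be sketched in a few lines: each layer contributes its active power weighted by the fraction of time it is awake, plus its sleep power for the rest. The layer names and all power figures below are hypothetical placeholders, not measurements.

```python
LAYERS = {
    # name: (active_mW, sleep_mW, duty_cycle)
    "sensor front-end": (0.5,   0.005, 1.00),   # always sampling
    "low-energy DSP":   (5.0,   0.010, 0.05),   # wakes to filter/classify
    "apps processor":   (500.0, 0.050, 0.001),  # wakes only on real events
}

def average_power_mw(layers):
    """Duty-cycle-weighted average power across all layers."""
    return sum(act * d + slp * (1 - d) for act, slp, d in layers.values())

print(f"average power: {average_power_mw(LAYERS):.3f} mW")

# Cutting the apps processor's wakeups by 10x is worth far more than
# micro-optimizing the always-on sensing layer:
tweaked = dict(LAYERS)
tweaked["apps processor"] = (500.0, 0.050, 0.0001)
print(f"with 10x fewer wakeups: {average_power_mw(tweaked):.3f} mW")
```

Even at a 0.1 per cent duty cycle, the hypothetical applications processor accounts for a large share of the average power here – which is why keeping it asleep dominates the architectural tradeoff.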
Power estimation issues
“The biggest missing gap is that we don’t have sufficiently good early power estimation,” says Wingard. “How do we help a system person reason about how many partitions they should make? How do they make that tradeoff? How do they construct realistic use-cases, and estimate how much power the device will use, before they build the chip? That’s the biggest challenge.
“We don’t have the language or the algebra for describing the behavior of these systems. The concept of virtual prototyping is a great concept. But so far it’s exclusively about functional modeling to write the software. These problems are more performance-oriented,” Wingard adds.
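The kind of early, use-case-driven estimate Wingard describes as missing can at least be roughed out before silicon: describe a period of device operation as a timeline of modes, multiply by per-mode power budgets, and back out battery life. The mode powers, the scenario and the battery size below are hypothetical placeholders chosen only to show the shape of the calculation.

```python
MODE_POWER_MW = {"sleep": 0.02, "sense": 0.6, "process": 6.0, "radio_tx": 40.0}

# One hour of a wearable's life, as (mode, seconds) segments.
USE_CASE = [
    ("sleep", 3300),
    ("sense", 270),
    ("process", 28),
    ("radio_tx", 2),
]

def avg_power_mw(use_case):
    """Energy over the scenario divided by its duration."""
    total_s = sum(s for _, s in use_case)
    energy_mj = sum(MODE_POWER_MW[m] * s for m, s in use_case)
    return energy_mj / total_s

def battery_life_hours(use_case, battery_mwh=150.0):
    return battery_mwh / avg_power_mw(use_case)

print(f"average power: {avg_power_mw(USE_CASE):.3f} mW")
print(f"battery life:  {battery_life_hours(USE_CASE):.0f} h")
```

A spreadsheet can do the same sum; Wingard’s point is that nothing in today’s flows lets architects attach models like this to realistic use-cases and partitioning choices before committing to a chip.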
A period of transition may be needed to develop effective architectures for the IoT. Wingard says: “We are on a part of the curve where it’s crazy to optimize for cost. What I think system companies need is the equivalent of platforms. They need good starting points where the design is done largely by assembly. Integration technologies are really key for this.”
But the experience will have to go into a series of architectural optimization steps. Rowen argues: “There is so much benefit in specialization that we won’t need to build huge dies. The shift will really go away from pushing the envelope of transistor count to pushing the envelope on architectural creativity.”
The unfortunate irony may be that, to meet power and performance targets, the lower-volume early designs will likely incur the higher NREs of more advanced processes before it becomes possible to roll back into cheaper, more mature technologies.