Find your low-power path

By Tom Starnes | Posted: May 1, 2009

Semiconductor vendors face increasing demands to lower power consumption. This trend has intensified in the last couple of years with the rejuvenation of the ‘green’ movement. In response, the industry has been getting smarter about low-voltage design, current-saving techniques for both the circuit and process levels, and coordinated power management. Meanwhile, programmers are concentrating on energy conservation in how they use and implement features on systems, processors and other chips, while developing strategies for minimizing power in complex systems.

The popular view is that the trend is mostly concerned with mobile devices. The number of applications in this market has exploded, they sell in very high volumes and battery technologies limit their active life. However, given growing concern for natural resources, low-power design is now a critical aspect of almost every conceivable end-market.

Server farms attract attention because they involve jamming hundreds of powerful systems into a room that must be kept at a reasonable temperature. With each server consuming 100-200W, and most of that power turning into heat, large air-conditioning plants and intricate air-flow and blower systems are needed. Between a third and a half of the power consumed by a server farm goes to cooling the equipment.

Then, there are powerline-fed devices in the home. D-Link has developed domestic power-saving routers and switches. Initially these had scheduling capabilities that allowed Wi-Fi radios to power down overnight. Later, D-Link added use-sensing circuits that optimize Ethernet interfaces according to receiver presence, cable length and signal quality. The total power savings can exceed 70%. Such devices also address another hot topic: standby—or ‘vampire’—power.

Automobiles have been lugging around ever-greater quantities of copper cable in support of upwards of 100 microcontroller (MCU)-based subsystems that manage the power train, traction and braking, dashboard instruments, entertainment and navigation systems, and power door lifts. The less electricity these MCUs require, the smaller the cabling, and therefore the less weight that has to be dragged around for 100,000 miles.

In the PC market, the laptop can still only be viewed as a computer that, while readily moved, should not stray too far from a power outlet. With only a couple of hours of life (less if the Wi-Fi is running), the battery mostly serves as a bridge as the laptop is swapped from one power point to another. However, one of the big attractions of the new PC-like ‘Netbooks’ is that they are expected to offer all-day battery-powered operation. Imagine how differently you might use one of these when freed from thinking about where next to plug in.

These are only a few examples of the benefits of bringing lower power to end-equipment. Medical electronics will require extremely efficient power consumption to enable long-term patient monitoring. Machine-to-machine, RFID and ZigBee-based mesh networks bring another set of requirements, and gain big advantages when they couple extremely low-power operation with very long battery life. More generally, society wants to ‘cut the cord’ and tidy up the spaghetti of wiring that has traditionally accompanied technology by moving to wireless functionality.

Power conservation in semiconductors

The design of a truly low-power electronic system is a multi-level problem and requires a systematic strategy to eliminate unneeded consumption. Losses can accumulate at a high level due to sloppy software or at a low one because of leaky semiconductor processes. As with many aspects of electronic design, the burden mostly rests on the chips and their designers.

The most advanced process nodes are hitting some interesting walls. Severe transistor leakage caused static power consumption to overtake dynamic power consumption as we passed 0.25um. The sad thing about static power is that it is consumed even when the chip is not being clocked and doing nothing at all, so it drives up standby consumption too. To counter this problem, foundries offer ultra-low-leakage processes (as do some companies that still operate their own fabs). The use of such processes yields fairly uniform reductions in power consumption across all the logic.

Numerous techniques minimize power consumption at the circuit level. Most are proprietary, involving specially built logic circuits and the selection of right-sized transistors for various functions (and are often closely associated with the underlying process node). Since voltage levels have a direct impact on power, lowering them as far as possible in a section of the chip can greatly reduce its ultimate requirements. Clock design, clock distribution and voltage regulation techniques are also of particular interest, since their impact on power consumption can be significant. Another direct factor affecting dynamic power is clock frequency, so a careful analysis of the chip’s clock domains and power-rail domains can pay off when blended into the gating strategies implemented in power management.
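
To make the gating idea concrete, here is a minimal firmware-level sketch. It assumes a hypothetical memory-mapped clock-enable register in which each bit gates the clock to one peripheral block; the register name, address and bit layout are illustrative, not taken from any real part.

```c
#include <stdint.h>

#define CLK_EN        (*(volatile uint32_t *)0x40001000u) /* hypothetical register */
#define CLK_EN_UART0  (1u << 0)                           /* hypothetical bit */

static void uart0_clock_on(void)  { CLK_EN |=  CLK_EN_UART0; }
static void uart0_clock_off(void) { CLK_EN &= ~CLK_EN_UART0; }

void send_report(const uint8_t *buf, uint32_t len)
{
    (void)buf; (void)len;    /* actual transmission elided in this sketch */
    uart0_clock_on();        /* ungate the clock only while the block works */
    /* ... write buf[0..len-1] to the UART0 data register ... */
    uart0_clock_off();       /* stop the clock: no toggling, no dynamic power */
}
```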

The memory system in processor-based equipment must also be considered. While multi-level memory systems can efficiently feed data to the processor, power consumption is often at odds with improved performance. Every time a copy of data is made in another layer of memory, energy is consumed. When those copies go stale or are never used, that energy is wasted, even though it was spent in the background to boost performance on the chance the data would be needed. Wider memories and caches can likewise be both beneficial and harmful to power consumption. Special low-power memory, whether on-chip or separate, can significantly improve power consumption while minimizing the performance hit.
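
As a small illustration, firmware can steer hot data into low-power on-chip memory explicitly. The sketch below assumes a GCC-style toolchain and a ‘.scratchpad’ section that the project’s linker script maps to on-chip SRAM; both the section name and the mapping are assumptions.

```c
#include <stdint.h>

/* Pin the frequently touched working buffer to on-chip SRAM, where each
 * access costs far less energy than a round trip to external DRAM.
 * ".scratchpad" must match a region defined in the linker script. */
static int16_t work_buf[256] __attribute__((section(".scratchpad")));

int32_t sum_samples(void)
{
    int32_t acc = 0;
    for (uint32_t i = 0; i < 256u; i++) {
        acc += work_buf[i];  /* all traffic stays on-chip */
    }
    return acc;
}
```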

For embedded systems, the selection of the right processor architecture, hierarchy of processor types, and management strategy for those processors can have a big influence on power consumption. Still, less can be more.

State machines are more efficient than clocked, sequential instruction-set processors. Likewise, having a general-purpose processor hammer away at a signal-processing algorithm will waste more energy than firing up a DSP to do the heavy lifting. If it can be well coordinated, turning on DSPs, graphics processors, vector processors, SIMD (single instruction, multiple data) engines or network processors only when they are needed is a great way of conserving energy by pointing the right tool at the right job.
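
From the host processor’s side, ‘pointing the right tool at the right job’ can look like the sketch below. The three driver calls are hypothetical stand-ins for a vendor API, with placeholder bodies so the example stands alone.

```c
#include <stdint.h>
#include <stddef.h>

/* Stubs standing in for a vendor's DSP driver; all three are hypothetical. */
static void dsp_power_on(void)  { /* enable the DSP's power rail and clock */ }
static void dsp_power_off(void) { /* gate the DSP's clock, drop its rail   */ }
static void dsp_run_fir(const int16_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) out[i] = in[i];  /* placeholder filter */
}

void filter_block(const int16_t *in, int16_t *out, size_t n)
{
    dsp_power_on();           /* pay the wake-up cost once */
    dsp_run_fir(in, out, n);  /* the DSP finishes the burst far faster, and
                                 at lower energy, than the host CPU would */
    dsp_power_off();          /* back to leakage-only consumption */
}
```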

But, as you would expect, evaluating the power requirements for each of your processors in conjunction with the performance advantages they offer to the overall system can complicate the development process. Detailed data may be difficult to obtain ahead of time for the circumstances of a given system. Perhaps the greatest advances in this area have been seen in development work for the applications processors used in 3G cell phones. ‘Balance’ is a key word here: balance the more general-purpose functions against the specialized processors without going crazy and putting half a dozen processors together and trying to coordinate them all. Remember, each processor will involve a memory system, variable-passing, resource-sharing, programming, space and cost.

The key to the efficient development of these systems lies in the power management of the numerous chips within a system. Powering memories, analog-to-digital converters, peripherals and networks on or off is likely to be a manual effort, with software dancing through the pretty intricate clock-speed, voltage and power-mode changes usually available within the MPUs, MCUs and DSPs. No universal standard exists for this power management, although ARM and National Semiconductor seem to be leading the way.
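
The sort of manual sequencing involved might look like the following sketch. The register names, bit values and ready flag are illustrative assumptions rather than details of a particular MCU; the one general rule embodied here is that frequency comes down before voltage, and voltage goes up before frequency.

```c
#include <stdint.h>

#define PLL_CTRL   (*(volatile uint32_t *)0x40002000u) /* hypothetical */
#define VREG_CTRL  (*(volatile uint32_t *)0x40002004u) /* hypothetical */
#define VREG_READY 0x80000000u                         /* hypothetical status bit */

void enter_low_speed_mode(void)
{
    /* The logic must still meet timing at the old voltage, so slow the
     * clock first, then lower the voltage. */
    PLL_CTRL  = 0x3u;   /* divide the core clock down (illustrative value) */
    VREG_CTRL = 0x1u;   /* select a reduced core voltage */
}

void exit_low_speed_mode(void)
{
    /* Reverse order on the way back up: raise the voltage, wait for the
     * regulator to settle, then restore the full clock speed. */
    VREG_CTRL = 0x0u;                          /* nominal voltage */
    while ((VREG_CTRL & VREG_READY) == 0u) {   /* spin until the rail settles */
    }
    PLL_CTRL = 0x0u;                           /* full-speed clock */
}
```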

Your father taught you most of what you need to know about lowering power consumption: when you leave the room, turn off the lights. Any resource within a chip or a system that is not being used at that moment should be turned off. When it needs to be used, run it full tilt and then turn it off again. If the system is idle, use the bare minimum of energy to keep it alive. It may sit unused for 23 2/3 hours a day; with a 100MHz or gigahertz processor, a lot can get done in the remaining 20 minutes, and the rest of the time almost no power is needed.
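
In firmware terms, the lights-off rule reduces to a duty-cycled main loop like the sketch below. It assumes an ARM-style target with a wait-for-interrupt instruction, GCC-flavored inline assembly, and a timer interrupt wired to timer_isr() elsewhere in the project; adjust for your own toolchain and vector table.

```c
#include <stdbool.h>

/* ARM "wait for interrupt": the core clock stops until the next interrupt.
 * GCC inline-assembly form; other toolchains provide an equivalent intrinsic. */
static inline void cpu_sleep(void)
{
    __asm__ volatile ("wfi");
}

static volatile bool wake_flag;  /* set by a periodic timer interrupt */

void timer_isr(void)             /* assumed to be wired into the vector table */
{
    wake_flag = true;
}

int main(void)
{
    for (;;) {
        while (!wake_flag) {
            cpu_sleep();         /* lights off between events */
        }
        wake_flag = false;
        /* do the day's 20 minutes of real work here, flat out,
         * then go straight back to sleep */
    }
}
```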

Software is as important a part of power consumption as it is of so much else in the system. The processor’s instruction set may determine how frugal the software can be with power. A processor architecture that spends a lot of time in tight loops might consume more power than one that is a bit more efficient. The compiler might worsen the power used by a system, or it may have real options available that improve it.

The right operating system (OS) might tighten up power consumption where another loads lots of unnecessary modules that burn energy needlessly. All the way down at the processor level, the time it takes an interrupt to get started or to close out its routines can make one processor better than another in terms of power consumption. On top of that, an OS may retain far too many variables or otherwise waste power beyond what the processor needs for the task at hand. The application programs running on the OS may have been written before power was a concern, or simply carried over from a prior project that valued performance above battery life.

It is not easy to reduce power consumption in semiconductors. For the designers who put in the effort, though, there are plenty of market opportunities in high-performance as well as battery-operated applications that can take advantage of low-power chips.

Tom Starnes is an advisor and principal analyst for Objective Analysis (www.objective-analysis.com).
