Jim Zemlin charts the course taken by the open source OS to preeminence in the device market and beyond.
At the Mobile World Congress in Barcelona this year, Linux blazed brighter than ever. A large number of vendors offered products based on Android, a managed, Linux-based environment for mobile devices such as phones. The event also saw two new native Linux environments make their debuts.
Samsung’s Bada stack suggests that the world’s second-largest phone vendor is moving away from Windows Mobile and toward Linux for smartphones based on its TouchWiz user interface (UI). Meanwhile, the world’s largest handset vendor, Nokia, has thrown its lot in with Intel on MeeGo, a native- and web-app environment that combines assets from Nokia’s Maemo Linux stack and Intel’s Moblin effort. Nokia had already used Linux, via Maemo, in its Internet tablets.
Linux has played a significant role in mobile phones since 2003, when Motorola shipped its A-760 handset and began replacing its in-house real-time OS (RTOS), P2K, with Linux. Today, though, Linux’s momentum in the device market seems to be greater than ever. What lies behind the current wave of widespread adoption? And, what does the future hold for the open source OS?
Linux quietly crept up on the computing world in the mid-1990s. It was a heady time. The World Wide Web was opened for commercial use in 1995. Email was supplanting snail mail and facsimiles as the primary form of written human correspondence. And, commercial operating systems were either obscenely expensive (about $15,000 for an IRIX license, for instance) or simply not stable enough.
No corporate decrees were issued heralding the arrival of Linux. Few CIOs wrote memos encouraging its adoption. But, slowly and surely, the geeks in the trenches took to it. Why? It worked. It was free. And, perhaps, because it was mildly subversive in an appealing way.
Mostly, though, it just worked. Crashes were all but unheard of, especially on systems without graphical interfaces. Uptimes (intervals between reboots) spanned multiple years, even on heavily utilized systems. So, as the Internet grew, so did Linux. It quickly became dominant in web servers, mail servers, DNS servers and other cogs in the modern Internet. With the arrival of the always-on Internet, there was a welcome place for an always-up OS. The fact that it was free was the icing on the cake.
The BSD Unix OS also rode the Internet wave. But, with its relatively smaller developer pool and no reciprocity requirement to channel improvements back, it had trouble keeping up. It fell behind Linux in the quantity, and arguably the quality, of available device drivers, filesystems, application software, ready-made stacks, developer tools and, eventually, architectural ports too.
Soon enough, Linux began trickling down into corporate networks. Its stability and cost advantages led to rapid adoption across the LAN in applications such as file-and-print servers, proxy servers, firewall/routers, version control servers, backup/storage servers and other types of intranet servers. You may notice that all of the above have a kind of device-like, single-purpose nature to them. Corporate adoption, in turn, raised Linux’s profile in the eyes of the world.
At some point in 1998, the Internet reached a kind of critical mass. The world was computer-happy. The dot-com bubble grew. Linus Torvalds began joking about Linux achieving “world domination.” “How long before Linux rises to dominate desktop computing too?” people began to wonder.
A long time indeed, as it would turn out. But at least one important guy already saw the future pretty clearly. Speaking to Linux user groups in 1999, Torvalds explained how the nearest land of Linux opportunity lay not on the desktop, but in devices. Desktops, he noted, were hard, because the workload is so varied and unpredictable, and perhaps even more importantly, because user habituation creates huge inertial forces.
Devices account for 98% of all microprocessors, by most estimates. And, while estimates vary as to when it will happen (some say it already has), devices are expected to overtake PCs as the primary means of Internet access for most people worldwide.
Early on, Linux’s robust networking stack and wide driver availability drove much adoption. Ethernet and later Wi-Fi were being grafted onto devices of all kinds, from HVAC and elevator controllers to printers and set-top boxes.
With most embedded OSs, few drivers were available, and it was expensive to commission a new one from a commercial OS supplier. With Linux, there were folks like Donald Becker cranking out new drivers constantly. If need be, you could write a
driver yourself, and if you shared it, someone else would maintain it for you. So, a huge body of available network drivers quickly emerged around Linux. That saved time and effort for device developers using the OS, and as a bonus, created a buyer’s market because you could choose among so many chips when finalizing a bill-of-materials.
Later, Linux’s extensive multimedia support and variety of UI technologies drove a second wave of adoption in devices. Consumer gadgets with PC-like interfaces began to proliferate, thanks in part to the advent of low-cost LCD technology. Whether you needed a simple frame buffer, or a rich desktop-like GUI, there was something to suit your needs. Further, there was a wide choice of media players, and a huge array of open and commercial codecs, too.
So, Linux got its foot into the device door thanks to robust networking, multimedia and UI flexibility. Then, it really started to catch on. By 2003, in terms of new design starts, it had become the top device OS, according to most market research reports.
Why such dominance?
Figuring out the precise reasons for Linux’s dominance in the device space is not easy, in part because the device space itself is so diverse. Nevertheless, let’s try. Here are a few factors that may have been decisive.
Linux’s brand of licensing seems to provide the best formal arrangement yet for getting people to share something important without squabbling too much over who owns what. Early fears over patent, copyright and other lawsuits proved largely unfounded. Those willing to abide by Linux’s simple, easy-to-understand GPLv2 terms have enjoyed year after year of hassle-free use.
Another key factor was Linux’s portability. Linux was originally x86-only. Starting in the mid-1990s, however, David S. Miller and Miguel de Icaza worked on a SPARC port, while Linus worked on a DEC Alpha port (a proof-of-concept port to Alpha having already been done in 1995 by Jim Paradis). Removing assumptions about endianness and rewriting assembler-heavy, x86-specific code took a year or more. Once done, though, the floodgates opened.
Today, Linux supports nearly two dozen architectures, not counting 32- and 64-bit variations. It has been ported to essentially all of the embedded architectures, allowing device developers to pick from the same pool of mature, well-tested open-source apps as server and desktop systems integrators (though some porting may be required).
Supporting an architecture is a far cry from supporting a specific chip, however. So, another important step in device domination was the arrival of commercial Linux vendors like MontaVista and TimeSys around 1999. These companies specialized in bringing up Linux on new hardware, typically embedded ‘system-on-chip’ (SoC) processors and single-board computers. Often, developers could use the resulting board support packages (BSPs) with or without commercial tools from the same vendors. The availability of commercial support, tools and services greatly facilitated Linux’s rise in devices, but access to ready-made patches for specific parts probably helped even more.
Today, upstream processor vendors, such as ARM, test their new designs extensively with Linux, using simulation tools, before committing them to actual silicon. Processor and OS development has become a two-way street, with cores optimized for specific OSs, as well as the other way around. Processor customers similarly commence Linux bring-up efforts concurrently with the chip design phase. That way, when the first samples arrive, the new hardware can be tested right away.
As you can imagine, being able to instantly modify and recompile Linux, without technical, licensing, or other obstacles, is what enables chip makers to optimize their products’ hardware/software interface in this manner. With other, proprietary OSs, it would be necessary to work with the vendor, or at least purchase a source code license (and sign a stack of non-disclosure agreements) before gaining that freedom.
Linux’s hassle-free redistributability then enables hardware companies to ship their Linux port with their product. That, in turn, helps their customers quickly evaluate the product under a familiar Linux environment. This is useful whether the customer plans to use Linux or another RTOS. Depending on the target market, the SoC might also come with VxWorks, Windows CE or another RTOS. But Linux is almost always there.
Similarly, peripheral chip vendors increasingly see Linux drivers as a vital component of all product launches. In the best case, they create open drivers and work with kernel.org to get them added to the mainline kernel. From then on, the drivers are maintained by the kernel community, freeing the vendor from legacy support issues. This type of driver support results in massive industry-wide savings in time and energy for long-term device support. In other cases, some vendors may have legitimate legal concerns with regard to open drivers. All politics aside, and simply as a practical matter, Linux’s tolerance (after a fashion) for binary modules has probably also played a role in its dominance in devices, a surprising number of which still have binary drivers. However, we expect open-source drivers to become the norm as the industry adjusts to a Linux development model.
One of the more interesting trends is the increasing interest of chip vendors such as Intel in delivering more complete OS stacks. Previously, a key goal was to deliver Linux evaluation kits that were good enough for customers to use as the basis for actual product development. Today, with the ever-rising value line in open source, a new goal seems to be delivering nothing less than an entire OS stack, complete with market-specific UI layers, and application development tools.
This model has long worked well in the enterprise computing market, where companies rarely develop an entire OS. Instead, they focus on the application layer, building atop a relatively complete stack.
Projects like MeeGo—which initially delivers no fewer than six vertical market UI layers—stand to provide a huge body of open source-licensed work. Standing on such shoulders, tomorrow’s Linux device developers should be well positioned to innovate in ways unimagined today, while minimizing development risks, and inventing the future of connected device computing in the most efficient manner possible.