What’s the shortest time in the universe?

By Chris Edwards | Posted: June 12, 2016

“It’s the time between putting out an open-source ARM core and getting a letter from an ARM lawyer,” says UC Berkeley professor Krste Asanovic, who leads the RISC-V initiative to develop and promote an open and free instruction-set architecture (ISA).

Asanovic argued in a talk at the Design Automation Conference (DAC) last week (June 7) that the instruction sets used in practically all computers today are dismal and that designers should throw them out along with their baggage and adopt an open specification that has had the benefit of decades of experience in RISC computer architecture.

Although other processors have appeared as open-source IP, Asanovic said RISC-V is less encumbered because it is backed by a charitable foundation set up by the UC Berkeley team rather than being owned by a commercial entity. He pointed to Sparc as a contrast: Sun Microsystems offered open-source versions of Sparc cores, but the architecture is now owned by Oracle, a company that has taken a more aggressive line on controlling the use of its IP since the acquisition.

In a world in which programmers have flocked to proprietary instruction sets and processor architectures rather than the open offerings, the initial prognosis for RISC-V’s success seems poor. But changes in attitudes among data-center operators toward the hardware they use have in recent years made the open-source hardware approach more attractive.

Open compute platforms

At the board level, the Open Compute Project has created a number of open-source designs that manufacturers and operators can adopt and modify. In a panel on open-source hardware at DAC on Wednesday (June 8), Rackspace senior engineer Aaron Sullivan explained: “Open Compute is not trying to define silicon. It aspires to something much more. There was an early attempt to integrate more tightly with silicon or build silicon in Open Compute but that mostly failed. It was because we didn’t have an agreement on what we were trying to solve. It didn’t go beyond proof of concept.”

The realization that power-hungry data centers can save energy by deploying accelerators has reignited interest in architectures that provide greater freedom in terms of licensing and usage.

Sullivan said: “We are focused on transforming our existing cloud platforms and products into a 2.0 phase where we start to contemplate heterogeneous platforms as a more standard way in which we compute.”

IBM distinguished engineer Randy Swanberg pointed to the “emerging workloads that we see from the new world of big data” that call for new architectural approaches: “Deep learning, machine learning. There are new frameworks being born out of the open-source communities.”

Texas Tech University researcher John Leidel echoed Asanovic’s comments of the day before on the cruft that commercial instruction-set processors tend to acquire, exemplified by AAA (ASCII adjust after addition), an obscure binary-coded decimal instruction in the x86 architecture.

Accelerator energy savings

Leidel said: “For us the driving concern around closed-source IP is the power. What is the AAA instruction and why do I need it? I don’t want to pay for the power and area to light up the AAA instruction. I want my hardware to be much simpler and cheaper.”
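For context, AAA fixes up the low byte of the AX register after software adds two unpacked binary-coded decimal digits. A minimal C model of its documented behavior, with the registers and flags represented as plain variables purely for illustration, shows how little work the silicon is being asked to keep alive:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative software model of the x86 AAA ("ASCII adjust after
 * addition") instruction, based on its documented behavior. The real
 * instruction works on the AX register and the AF/CF flags; this
 * sketch models them as plain variables. */
typedef struct {
    uint8_t al, ah;   /* low and high bytes of AX */
    int af, cf;       /* auxiliary-carry and carry flags */
} cpu_state;

static void aaa(cpu_state *s) {
    if ((s->al & 0x0F) > 9 || s->af) {
        s->al += 6;       /* skip the non-decimal encodings 10..15 */
        s->ah += 1;       /* carry into the next BCD digit */
        s->af = 1;
        s->cf = 1;
    } else {
        s->af = 0;
        s->cf = 0;
    }
    s->al &= 0x0F;        /* result is a single unpacked BCD digit */
}

int main(void) {
    /* 7 + 5 as unpacked BCD: the sum 12 becomes digits 1 and 2. */
    cpu_state s = { .al = 7 + 5, .ah = 0, .af = 0, .cf = 0 };
    aaa(&s);
    printf("AH=%u AL=%u\n", (unsigned)s.ah, (unsigned)s.al);
    return 0;
}
```

Trivial as the logic is, carrying it in hardware costs area and leakage on every chip shipped, which is exactly the overhead Leidel objects to paying for.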

Leidel’s group is developing a server SoC design tuned to the workloads used by the project’s sponsor, the US Department of Energy (DoE). “Government has decided to go down this road from a power and infrastructural point of view,” he said.

Open-source IP provides the ability to customize architectures and IP-block interfaces more freely. He pointed to issues in dealing with the IP agreements around Hybrid Memory Cube as a further reason for favoring open source.

“It’s a monumental licensing problem to build new versions. God forbid that we want to put together a custom memory cube. There are issues with licensing the array, the logic required to do all the control, plus whatever custom logic I want to put down. There are five people in the room and all have better attorneys than I do. But I want to go build it,” Leidel explained.

Instruction-set choice

Swanberg said IBM recognizes the desire to create custom versions of processors that dispense with many of the operations architects accumulate over time in the attempt to capture as many target markets as possible.

Referring to the motivations for IBM’s formation of the OpenPower initiative, Swanberg said: “There is a lot more in the Power 8 than you need for that problem. You will see this changing with Power 9. You will have multiple versions that are still targeted to your mainstream businesses, alongside a more streamlined version that has less baggage to provide on-ramps for this innovation.

“However, you can’t go crazy. At some point consistency of the ISA and binary compatibility matters. You can’t have a new generation come out and all of a sudden you get a core dump because the software calls an illegal instruction or, god forbid, you have to emulate. The instruction set is a contract between hardware and software. You have to be prepared to maintain that forever,” Swanberg added.
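The emulation fallback Swanberg dreads works roughly like this: when a binary uses an instruction the hardware no longer implements, the operating system delivers an illegal-instruction trap, and a software handler can step in to emulate the operation at a large performance cost. A simplified POSIX C sketch of where that hook lives (the machine-specific decode-and-emulate step is deliberately elided):

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* On an illegal-instruction trap, a real trap-and-emulate scheme would
 * decode the faulting opcode from the saved machine context, perform
 * the operation in software, advance the program counter, and return.
 * This sketch only shows where the hook is installed. */
static void on_sigill(int sig, siginfo_t *info, void *context) {
    (void)sig; (void)context;
    fprintf(stderr, "illegal instruction at %p: emulate or die\n",
            info->si_addr);
    _exit(1);  /* a real handler would emulate and return instead */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);

    /* Code compiled for a richer ISA revision would run here; any
     * instruction the hardware lacks lands in on_sigill(). */
    puts("running...");
    return 0;
}
```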

Although binary compatibility over time will remain important, the need to improve efficiency and the growth in managed-code environments are providing the push to co-optimize hardware and software.

“Now you can have the optimization discussion. Spark is a perfect example. It’s compiled to bytecode and runs on a JVM today. They are now moving to native-code generation. Whatever that is, that’s where we will target optimization. You are not worried about Spark itself; you are worried about the JIT code,” Swanberg explained. “You may say: ‘If you add this instruction I can speed up the virtual machine by ten or more’.”

Beyond the ISA

The code used by the virtual machine is likely to become the long-term target, said Paul Teich, principal engineer at analyst firm TIRIAS: “The processor’s instruction set may not be an edge when you move to interpretive code.”

Swanberg said the virtual machine code is a better target for co-optimization rather than procedures in an application like Spark. “If you try to optimize further up, whatever traces you based your chip on, by the time you ship that’s all stale.

“Moving outside the processor is the next level. How do we do plug-and-play acceleration? We may have a GPU-accelerated version of Spark or plug in an FPGA. That’s the approach I think we have to take for co-optimization,” Swanberg added.

Chris Aniszczyk, interim executive director of the Cloud Native Computing Foundation and former Twitter engineering manager, said: “At Twitter we were generating hundreds of petabytes of data a day. Anything we could have done to speed up our Spark scaling, we would. Could we offload this onto GPUs?”

The need to accelerate will only get worse, Aniszczyk said: “The companies will continue to collect more data. For a long time, humans were mainly the driving factor for collecting data. As more devices come online the amount of data that will be collected will continue to grow and increase the problem of analyzing that data in real time. Google are designing their own chips to make their AI systems run faster. We will see this problem bubbling up more and more.”

Companies investing in customized architectures will face difficulties in attracting developers. Swanberg said it was the seemingly small decision to make better little-endian data handling a part of Power 8 that helped boost the number of open-source software ports to IBM’s platform from code originally written for the far more prevalent x86.
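The porting friction little-endian mode removed is easy to picture: a great deal of code written on x86 quietly assumes little-endian byte order whenever it reinterprets a byte buffer as an integer. A small illustrative C fragment of the kind that misbehaves when recompiled for a big-endian machine:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* A 4-byte field serialized little-endian on the wire. */
    uint8_t wire[4] = { 0x78, 0x56, 0x34, 0x12 };

    /* Endian-dependent: yields 0x12345678 on a little-endian host
     * but 0x78563412 on a big-endian one. Code like this, common in
     * software developed on x86, is what breaks on a port. */
    uint32_t careless;
    memcpy(&careless, wire, sizeof careless);

    /* Endian-independent: assemble the value byte by byte, so the
     * result is the same on every host. */
    uint32_t portable = (uint32_t)wire[0]
                      | (uint32_t)wire[1] << 8
                      | (uint32_t)wire[2] << 16
                      | (uint32_t)wire[3] << 24;

    printf("careless=0x%08x portable=0x%08x\n",
           (unsigned)careless, (unsigned)portable);
    return 0;
}
```

Running Power 8 little-endian means the careless version of this code behaves just as it did on x86, so ports need far fewer fixes.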

Leidel said services are now available that can help bring more software developers onto an architecture. “A lot of the tools and software ecosystems will be much like what we have today. All these wonderful tools: if you port at least a portion of them over to your IP, users will be able to make use of that open-source hardware.

“There are companies who specialize in bringing new tools on. If I go to 1-800-CODEPLAY in Scotland, a few months later I’ve got a compiler, linker and a core set of software tools.”

Open-source comms build-out

Aniszczyk said another motivation for open-source hardware IP is to support the development of a communications and compute infrastructure for the remainder of the world’s population not yet online.

“Certain companies like Facebook were unhappy with the status of hardware when it came to working with wireless stations. Traditional players asked: how do we innovate in this space? Move faster? Let’s team up and open up some designs around open basestations,” Aniszczyk said.

“Hardware companies are coming in to find ways to collaborate. Having a neutral or non-profit structure where there are clear rules on IP is critical. You will see this pattern more as hardware companies start getting involved in sharing designs.”

Leidel added: “With the growing complexity of SoCs in the switching and mobile markets, more organizations will get onboard with funding these efforts. Google and Fujitsu, for example, are in the RISC-V Foundation. Embedded mobile devices will drive a lot of funding for research into open-source hardware.”

Open-source hardware is likely to remain the preserve of corporations rather than enthusiastic part-time users because of the cost of implementation. The expense of fabbing an SoC will demand focus.

Sullivan concluded: “It needs to be something that’s useful and attracts users. Most open-source development doesn’t fit that bill. It’s someone’s passion or science project. This work won’t be a hundred of us in a room working on different things, talking about how cool open is.”

