NI uses complex RF to help attack software development

By Chris Edwards | Posted: March 25, 2014

A massive multiple-input, multiple-output (MIMO) RF antenna processor being developed at Lund University is serving as one of the testbeds for a platform-architecture view that National Instruments is developing for its LabVIEW software development tool, as the company extends its reach from lab automation further into embedded-systems development.

During his keynote speech at the DATE conference in Dresden, David Fuller, vice president of application and embedded software at NI, said the company has built a prototype of a platform view. “It’s a graphical representation of the underlying system. You might use it to document a system or explore a system architecture. But using this view you can also create a graph of the system that the software will optimise against. The idea is that you start with a model of the software with no view of what it will run on, and then map the blocks within that into FPGAs, processors and so on.”

Fuller explained: “Maybe you know affinities for I/O, so you pick processing elements close to those. But the tool also knows how to minimise communications costs, so it clumps them together where it makes sense. It can look at resource utilisation and then generate multiple possible implementations.”
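The description suggests a simple placement heuristic. As a rough illustration only, since NI has not published its algorithm, a greedy mapper might pin blocks with I/O affinity first and then place the remaining blocks wherever they add the least communication cost. Every number and name in this C sketch is an invented assumption:

```c
/* A minimal sketch of the mapping step Fuller describes: blocks with
 * I/O affinity are pinned near their I/O, and heavily-communicating
 * blocks are clumped together.  All names and numbers here are
 * illustrative assumptions, not NI's actual algorithm. */
#include <stdio.h>

#define NBLOCKS 4
#define NELEMS  2   /* e.g. one FPGA, one CPU */

/* comm[i][j]: relative data volume between blocks i and j (assumed) */
static const int comm[NBLOCKS][NBLOCKS] = {
    {0, 9, 1, 0},
    {9, 0, 1, 0},
    {1, 1, 0, 5},
    {0, 0, 5, 0},
};

/* affinity[i]: element a block must sit on (-1 = free to move),
 * e.g. block 0 reads an ADC wired to element 0 */
static const int affinity[NBLOCKS] = {0, -1, -1, 1};

int main(void) {
    int place[NBLOCKS];
    for (int i = 0; i < NBLOCKS; i++) {
        if (affinity[i] >= 0) { place[i] = affinity[i]; continue; }
        /* greedy: pick the element that minimises traffic to
         * already-placed neighbours */
        int best = 0, best_cost = -1;
        for (int e = 0; e < NELEMS; e++) {
            int cost = 0;
            for (int j = 0; j < i; j++)
                if (place[j] != e) cost += comm[i][j];
            if (best_cost < 0 || cost < best_cost) { best_cost = cost; best = e; }
        }
        place[i] = best;
    }
    for (int i = 0; i < NBLOCKS; i++)
        printf("block %d -> element %d\n", i, place[i]);
    return 0;
}
```

A real tool would search over many such placements, which is what Fuller means by generating multiple possible implementations; the greedy pass above just shows the cost trade-off in miniature.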

To help develop the platform view, one of the target applications is a 32 x 32 MIMO system being researched by Ove Edfors and Fredrik Tufvesson at Lund University. “It can have hundreds of antenna elements. In it we have software-defined radio-technology elements that are synchronised to each other on the order of milliseconds. Each one provides 700Mbit/s of data that gets aggregated across a set of PCI Express backplanes.

“The whole system consumes pretty much the maximum bandwidth on PCI Express but this is done in a way where a radio designer who doesn’t know how to architect conventional embedded systems can pull it off. They can use their preferred model of computation, which in this case is based on multirate dataflow, and then stitch it together [for the target system],” said Fuller.
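Multirate dataflow, the model of computation the Lund designers work in, can be reduced to a small sketch: actors fire when enough tokens are queued on their inputs, and a static schedule balances production against consumption. The rates, buffer size and actor names below are illustrative assumptions, not details of the Lund system:

```c
/* A toy multirate-dataflow pair: actors fire when enough tokens are
 * available, and a static schedule balances their rates.  Rates and
 * buffer sizes here are invented for illustration. */
#include <stdio.h>

#define PROD_RATE 2   /* tokens produced per source firing */
#define CONS_RATE 3   /* tokens consumed per sink firing   */
#define BUF_LEN   16

static int fifo[BUF_LEN];
static int head, count;

static void source(void) {               /* e.g. samples from one antenna */
    static int sample;
    for (int i = 0; i < PROD_RATE; i++)
        fifo[(head + count++) % BUF_LEN] = sample++;
}

static void sink(void) {                 /* e.g. an aggregation stage */
    int sum = 0;
    for (int i = 0; i < CONS_RATE; i++) {
        sum += fifo[head];
        head = (head + 1) % BUF_LEN;
        count--;
    }
    printf("consumed %d tokens, sum = %d\n", CONS_RATE, sum);
}

int main(void) {
    /* balance equations: 3 source firings x 2 tokens
     *                  = 2 sink firings  x 3 tokens */
    for (int iter = 0; iter < 2; iter++) {
        source(); source(); source();
        sink(); sink();
    }
    return 0;
}
```

The appeal for the radio designer is that only the rates and the graph need to be specified; deriving the schedule and the buffer sizes, and stitching the result onto FPGAs and backplanes, is the tool's job.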

Manual override

Fuller said that, at least in the early stages, the plan is not to make the mapping entirely automatic. Referring to methodology changes in EDA, he said customers typically only adopted new techniques, such as logic synthesis, that not only delivered an order-of-magnitude productivity improvement but also provided a way back to the older methods.

“You are always looking for a 10x improvement. Otherwise, why switch? But as well as that, most of these technologies were built on the previous technology capability. This way, you allow increased abstraction but still have escape valves to get back to the previous methodology. You can use abstraction for productivity but are still able to hand-tune for performance where necessary. It has been the same in software. When C came out, there was always an asm block to drop back into. Over time the performance of the compiler or synthesiser gets better and better so they make you not just more productive but more performant. But you always need these escape valves,” Fuller argued.
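The C-and-asm escape valve Fuller cites looks like this in practice. This is a minimal sketch assuming GCC or Clang on x86-64; other toolchains use a different inline-asm syntax and would take the portable branch:

```c
/* The classic "escape valve": an inline-asm block inside otherwise
 * portable C.  Assumes GCC or Clang on x86-64; the instruction is
 * trivial on purpose, just to show the mechanism. */
#include <stdio.h>

static long add(long a, long b) {
#if defined(__GNUC__) && defined(__x86_64__)
    long r;
    /* hand-written instruction where the compiler's output is
     * deemed not good enough */
    __asm__("lea (%1,%2), %0" : "=r"(r) : "r"(a), "r"(b));
    return r;
#else
    return a + b;   /* the abstracted, portable path */
#endif
}

int main(void) {
    printf("%ld\n", add(40, 2));   /* prints 42 */
    return 0;
}
```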

The problem with the software side of computer science (CS) now, Fuller said, is that it is as much as 15 years behind hardware design in terms of methodology. “Multicore processors created a CS crisis. CS hasn’t even responded to the processors that we have today. Software really needs to respond to this challenge.”

Ideal models

“The ideal tool from a system point of view is to use the language of your choice that then separately maps to different architectures. After the mapping has occurred you can drill in and optimise it. Future tools should provide approachability and utility through abstraction, and provide application portability as the hardware evolves,” Fuller said.

The question is which languages or models of computation to support. He said NI has taken the approach of supporting multiple language types, including C, graphical languages and mathematical notation. “I would also throw in state charts. But to do maths, you should use the notation taught at school.

“Ideally, we want the minimum number of models of computation. But the question is what is the minimal set? The industry maybe should have a debate on what they should be,” Fuller said.
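Of the models Fuller lists, a state chart is the easiest to show in miniature: an enum of states and a transition function driven by events. The states and events here are invented purely for illustration:

```c
/* A state chart reduced to its minimal C form: states, events, and a
 * transition function.  The radio-link states below are invented for
 * illustration only. */
#include <stdio.h>

typedef enum { IDLE, SYNCING, STREAMING } state_t;
typedef enum { EV_START, EV_LOCKED, EV_LOST } event_t;

static state_t step(state_t s, event_t ev) {
    switch (s) {
    case IDLE:      return ev == EV_START  ? SYNCING   : IDLE;
    case SYNCING:   return ev == EV_LOCKED ? STREAMING : SYNCING;
    case STREAMING: return ev == EV_LOST   ? SYNCING   : STREAMING;
    }
    return s;
}

int main(void) {
    static const char *name[] = { "IDLE", "SYNCING", "STREAMING" };
    state_t s = IDLE;
    event_t trace[] = { EV_START, EV_LOCKED, EV_LOST, EV_LOCKED };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        s = step(s, trace[i]);
        printf("-> %s\n", name[s]);
    }
    return 0;
}
```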

