In his keynote at CDNLive, Cadence Design Systems president and CEO Lip-Bu Tan pointed to the internet as underpinning the next wave of technological growth, with the Internet of Things becoming a key driver.
As reported by Cadence’s Richard Goering, Tan used figures from Cisco to describe how data is moving into the cloud, propelled today by mobile devices. Some 50 million cars are now shipped each year, said Tan, many with in-vehicle networks that are beginning to connect. The Internet of Things could see 50 billion connected devices by 2020, all of them driving data into the cloud.
More is likely to turn up at DATE next week, but at the Embedded World show in Nürnberg both Cadence and Synopsys talked about the implications for designers of the rise of ‘systems of systems’, an issue on which the Internet of Things is focusing attention.
Frank Schirrmeister, group director of product marketing for the system and software realisation group at Cadence, said: “When you look at tiny cores with sensors on them, they might each be transmitting 200Mbyte of data per year. You need to figure out network architectures that can handle that and configure them correctly.”
Marc Serughetti, director of business development for system-level solutions at Synopsys, said: “I think the idea of system of systems will become more important. People want to know the answer to questions such as: ‘What happens if the network dies at this point?’ Do the devices go bad or can they recognize the problem and shut themselves down properly? It becomes a multidomain type of simulation.”
The automotive environment provides an opportunity to see how some of this will pan out. The car itself is a system of systems. As it gets connected to the internet, that system will grow, but issues of linking together different types of simulation, emulation and prototyping are already cropping up there.
On its booth at Embedded World, Cadence demonstrated a driver assistance system able to recognize road signs and display the speed limits and other warnings they convey on a simulated dashboard. These systems, which use multicore SoCs, FPGAs or a mixture of both, already operate with multiple cameras. They are likely to be integrated with radar and potentially with systems that control the car. For example, the cruise control may be altered to take account of changes in speed limit or a drop in separation distance from the vehicle in front.
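The kind of integration described above can be illustrated with a toy sketch: a cruise controller whose set-point reacts both to a recognised speed-limit sign and to the radar-measured gap to the vehicle ahead. The function name, parameters and thresholds are all illustrative, not taken from any real ADAS stack.

```python
# Toy sketch of a cruise-control set-point reacting to two inputs:
# a speed limit reported by the camera pipeline, and the separation
# distance reported by radar. All names and values are illustrative.

def cruise_target(current_target_kmh, sign_limit_kmh=None,
                  gap_m=None, min_gap_m=40):
    """Return an updated cruise-control target speed in km/h."""
    target = current_target_kmh
    if sign_limit_kmh is not None:
        # A new speed limit was recognised: never exceed it.
        target = min(target, sign_limit_kmh)
    if gap_m is not None and gap_m < min_gap_m:
        # Separation distance too small: back off proportionally.
        target = target * (gap_m / min_gap_m)
    return target

print(cruise_target(120, sign_limit_kmh=80))      # sign caps the speed
print(cruise_target(80, gap_m=20, min_gap_m=40))  # short gap reduces it
```

In a real system each input would arrive from a separate subsystem over the in-vehicle network, which is exactly why the multidomain simulation discussed below matters.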
Virtual prototyping is essential, said Schirrmeister, “because these systems have a fair amount of software content that has to be developed as early as possible”.
As more of the systems become interlinked, more multidomain simulation is needed. Some of the behavioral analog simulation can be handled using real-number modelling with wreal nets. Other parts need access to other tools, such as Simulink or Vector Informatik’s CAN simulation software, and must switch between different representations as needed.
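The idea behind real-number modelling is to replace SPICE-level analog solving with discrete updates of real-valued signals, so analog blocks can run inside a fast event-driven simulation. A minimal illustration of the principle, here sketched in Python rather than SystemVerilog, is a first-order low-pass sensor front-end sampled at fixed timesteps; the parameters are illustrative.

```python
# Real-number modelling in miniature: an analog node is represented
# as a real value updated at discrete steps, not solved continuously.
# Here, a one-pole RC-style low-pass filter with illustrative values.

def rc_filter(samples, dt=1e-6, tau=10e-6):
    """One-pole low-pass: out += (in - out) * dt/tau each step."""
    out, alpha, result = 0.0, dt / tau, []
    for v in samples:
        out += (v - out) * alpha
        result.append(out)
    return result

step = [1.0] * 50        # unit-step input
y = rc_filter(step)
print(round(y[-1], 3))   # output settles toward 1.0
```

The same sampled-real approach is what lets such a model sit alongside digital logic and network models in a single event-driven simulation run.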
“We see a lot of automotive guys building their own FPGA boards as their ECU designs are often small compared to wireless SoCs – which you may need a Palladium to emulate – and because they want to have the real sensors connected to the FPGA,” said Schirrmeister. “But they are doing things in software too, where they have virtual hardware-in-the-loop.”
Andy Richardson, head of simulation at Jaguar Land Rover, described the approach at last year’s UK Matlab user conference in Birmingham as a ‘double-V’ process. Rather than using a traditional V software development process from concept to system integration and test, the motor company does virtual integration and test first before moving onto the second V, in which the final system is tested using full hardware-in-the-loop techniques.
The virtual hardware-in-the-loop approach makes it easier to inject faults and failure modes into a simulation to see how the system fares. The ability to demonstrate graceful degradation under hardware failures is a key element of the ISO26262 safety standard that the automotive industry is now embracing.
“Whenever someone says ISO26262, you need to introduce test patterns of certain kinds and inject certain errors,” said Schirrmeister. “It’s much better done with something that you have tight control over, which tends to be simulation.”
Serughetti agreed: “If you work with an FPGA-based approach, you have the problem of deciding, if you want to inject a fault, where do you inject it? And see what is happening? With simulation, I have those capabilities and I have them non-intrusively. There are a lot of benefits to simulation.
“But on the other side, we rarely see a customer that is going to use just one technique. Each approach brings its own value in terms of debug capability and simulation time.”
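The non-intrusive fault injection Serughetti describes can be sketched as follows: in simulation, a signal can be intercepted between two models without modifying either model’s code. In this hedged example, a wrapper forces a ‘network dead’ fault at a chosen time, and a device model is expected to detect the silence and shut itself down cleanly; all names and timings are illustrative.

```python
# Non-intrusive fault injection, sketched: a wrapper sits between the
# network model and the device model, so neither needs modifying to
# test the failure mode. All names and values are illustrative.

def network(t):
    """Healthy network model: delivers a heartbeat message each tick."""
    return "heartbeat"

def inject_fault(signal, fail_at):
    """Wrap a signal source; deliver nothing from tick fail_at onward."""
    return lambda t: None if t >= fail_at else signal(t)

def device(rx, timeout=3):
    """Device model: shuts itself down after `timeout` silent ticks."""
    silent, state = 0, "running"
    for t in range(20):
        silent = 0 if rx(t) is not None else silent + 1
        if silent >= timeout:
            state = "shutdown"   # graceful degradation path
            break
    return state, t

faulty_rx = inject_fault(network, fail_at=10)
print(device(faulty_rx))   # the device notices the dead network
```

This is the sort of scenario, ‘what happens if the network dies at this point?’, that is awkward to stage on an FPGA board but trivial to script in simulation, which is the trade-off both vendors describe.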