How many miles does it take to verify a self-driving car? For major automakers such as Toyota, the answer is millions. In an on-stage interview at the recent Design Automation Conference in Las Vegas, Siemens Digital Industries Software president and CEO Tony Hemmelgarn cited Toyota’s plan in explaining what it could take to get reliable autonomous road vehicles. “It will take eight to ten billion miles to get to level-five autonomy. One article said Waymo is in the lead because it’s done nine million. But no-one is going to [physically] drive eight to ten billion miles.”
Waymo certainly has not had its physical vehicles cover billions of miles, at least not yet. The answer lies in extensive use of simulation technology. One of the big problems with evaluating the ability of autonomous systems to cope with road conditions is that most of the time, thankfully, driving is an uneventful affair. The only practical way to probe as many edge cases as possible is to construct simulations and run them against models of the vehicle. Digital-twin technology promises to make this possible by reflecting the behavior of the physical vehicle in the virtual world.
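The scale of that search is easier to see in a toy example. A scenario engine sweeps randomized parameters through a vehicle model and records the combinations that end badly; the physics and parameter ranges below are hypothetical simplifications, sketched only to illustrate the approach:

```python
import random

def simulate_scenario(ego_speed_mps, pedestrian_gap_m, reaction_s, decel_mps2=6.0):
    """Toy longitudinal model: can the ego vehicle stop before the pedestrian?

    Hypothetical, highly simplified physics used only to illustrate
    sweeping scenario parameters in search of edge cases.
    """
    # Distance covered during the stack's reaction time
    reaction_dist = ego_speed_mps * reaction_s
    # Braking distance under constant deceleration: v^2 / (2a)
    braking_dist = ego_speed_mps ** 2 / (2 * decel_mps2)
    return reaction_dist + braking_dist < pedestrian_gap_m  # True = safe stop

def probe_edge_cases(trials=100_000, seed=42):
    """Randomize scenario parameters and collect the failures (edge cases)."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        params = (
            rng.uniform(5, 30),     # ego speed, m/s
            rng.uniform(10, 80),    # gap to pedestrian, m
            rng.uniform(0.2, 1.5),  # reaction time, s
        )
        if not simulate_scenario(*params):
            failures.append(params)
    return failures

failures = probe_edge_cases()
print(f"{len(failures)} unsafe parameter combinations found")
```

A real scenario engine models the full vehicle dynamics, sensor behavior and surrounding traffic, and biases its sampling toward the rare, dangerous corners of the parameter space rather than drawing uniformly.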
But it is not just autonomous vehicles that can take advantage of the digital-twin approach. The digital twin provides a way to handle the increasingly complex problem of verifying all the car’s systems for safety. “Safety needs to also be looked at from a system level and this is where a full digital twin will be necessary,” says Bryan Ramirez, strategic marketing manager at Mentor, a Siemens business.
Hardware in the loop
For many years, the industry has been on a course towards the digital-twin concept. Chip design itself has made extensive use of virtual prototyping to evaluate the behavior of silicon long before it returns from the fab. Automotive engineers have applied techniques such as hardware-in-the-loop (HIL) testing and model-based development to determine how electronic control units (ECUs) are likely to handle real-world situations without risking damage to engines, bodywork and, especially, test drivers.
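The closed-loop idea behind HIL testing can be sketched in a few lines. In real HIL, a plant model runs in real time against the physical ECU; here both sides are simulated (closer to model-in-the-loop), with a toy first-order engine model and a PI controller standing in for the ECU under test. All constants are hypothetical:

```python
def plant_step(rpm, throttle, dt=0.01):
    """First-order engine model: rpm relaxes toward the throttle-commanded
    speed. Hypothetical constants, for illustration only."""
    target = 800 + 5000 * throttle       # idle speed + throttle authority
    tau = 0.5                            # engine time constant, seconds
    return rpm + (target - rpm) * dt / tau

def controller_step(setpoint, rpm, integral, dt=0.01, kp=2e-4, ki=5e-4):
    """PI speed controller standing in for the ECU under test."""
    error = setpoint - rpm
    integral += error * dt
    throttle = max(0.0, min(1.0, kp * error + ki * integral))
    return throttle, integral

# Closed loop: exercise the 'ECU' against the plant model instead of real iron
rpm, integral, setpoint = 800.0, 0.0, 3000.0
for _ in range(2000):                    # 20 seconds of simulated time
    throttle, integral = controller_step(setpoint, rpm, integral)
    rpm = plant_step(rpm, throttle)
print(f"steady-state rpm: {rpm:.0f}")
```

The point of the exercise is that the controller can be driven to its limits, and into fault conditions, with no risk to engines, bodywork or drivers.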
The ISO 26262 standard takes into account the many ways in which testing must be performed to determine whether or not a vehicle can be considered safe. The standard makes explicit reference to the V-model of development and test, in which decomposition of the design into subsystems and components proceeds down one leg of the V. The journey along the upward leg then relates tests and verification to the components, subsystems and ultimately the entire system as the project heads to completion.
The V-model is considered instrumental in the development and evaluation of safety-critical subsystems. As the vehicle systems are integrated, test at each successive level can show how they perform under different stimuli. But it is at the final system level where the rubber is going to meet the virtual road.
“The industry is making advances on how to analyze safety within sub-domains of a car,” Ramirez says. “But there are many system level safety implications where a digital twin could more efficiently and accurately analyze and optimize safety across aspects within the car.”
Beyond the vehicle
The digital twin will take into account the combination of ICs within ECUs, their software, sensors, and networking protocols. “Eventually that will extend to elements beyond the car that affect autonomous driving, such as V2X communication and other environmental factors,” Ramirez says.
At a September presentation during the company’s annual industry analysts’ conference, Siemens portfolio development executive Alexandra Francois-Saint-Cyr said simulations based on physical driving experience are already able to show ways to improve the safe operation of autonomous cars.
“We are expanding ADAS testing capabilities to include not just vehicle-to-vehicle but vehicle-to-infrastructure and running validations using our own fleet of vehicles,” she claimed. A video showed how including information from street-side systems provides better data to cars when they take turns at intersections. “It shows you can increase the speed of the turning vehicle to 16mph. With vehicle-to-vehicle communications it can only turn the corner at 10mph.”
A key issue with system-level test, whether using full or partial simulation, is that of levels of abstraction. Ramirez says customers “realize that raising the level of abstraction is the only way to keep pace”.
But simply raising the level of abstraction is not entirely the answer. “One of the challenges faced with raising the level of abstraction is the trade-off in accuracy and the resulting confidence of results.
“We have overcome these challenges with normal hardware development in which the majority, or sometimes all, of the functional verification is done at the RTL level instead of relying heavily on gate-level simulations, which are notoriously slow to simulate and aren’t possible until very late in the design cycle,” Ramirez adds.
As the total gate count increases, even the abstraction of RTL is not enough. The models need to move to a higher level. “High-level synthesis (HLS) can significantly improve design and verification efficiencies. But it also raises questions: what does verification look like and how does that tie into the traditional, non-HLS development flows in order to have certainty that the design is bug free?”
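One piece of that answer is cross-checking abstraction levels against each other, the way an HLS flow checks an abstract source model against its bit-accurate refinement. The sketch below is not any vendor's flow: it compares a floating-point moving-average model against a hypothetical fixed-point version over random stimuli, with all bit widths invented for illustration:

```python
import random

def reference_avg(samples):
    """Abstract (untimed, floating-point) model: mean of the last 4 samples."""
    return sum(samples[-4:]) / 4.0

def fixed_point_avg(samples):
    """Bit-accurate refinement, as an HLS tool might implement it:
    16-bit inputs, 18-bit accumulator, divide by 4 via a shift.
    Hypothetical widths, chosen so the accumulator never overflows."""
    acc = 0
    for s in samples[-4:]:
        acc = (acc + s) & 0x3FFFF        # 18-bit accumulator
    return acc >> 2                      # floor(sum / 4)

# Cross-check the two abstraction levels over random stimuli
rng = random.Random(0)
for _ in range(10_000):
    samples = [rng.randrange(1 << 16) for _ in range(4)]
    ref, impl = reference_avg(samples), fixed_point_avg(samples)
    assert abs(impl - ref) < 1.0         # agree within quantization error
print("10,000 stimuli: fixed-point model matches reference within quantization")
```

The same pattern scales up: the high-level model becomes the golden reference, and the lower-level implementation is checked against it, which is what lets most verification cycles stay at the faster, more abstract level.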
Another requirement is the ability to move up and down the levels of abstraction to make it possible to probe subsystem behavior in detail, whenever it makes sense. Development teams, Ramirez says, “are looking for ways of verifying safety at a low level, for something at the IP level, but then wanting to raise the abstraction for that block through higher-level models built upon what was learned during the low level, detailed analysis”.
Although there is a clear value in using digital-twin concepts during the test and verification phases, models will also help streamline the earlier analysis and planning phases. Ramirez explains: “Until recently, safety was really looked at from two perspectives. The first was ‘spreadsheet-level, expert judgment’ and the second was the use of fault-injection simulations [to test assumptions]. These are both important steps but relying only on these creates challenges for customers.
“The spreadsheet-level, expert judgment leaves customers exposed to gaps that their experts miss. Most likely this would be found during fault injection but often that realization is too late in the design cycle,” Ramirez notes. This is generally because customers use gate-level simulations for their fault injection campaigns.
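The shape of a fault-injection campaign, and why its results matter, can be sketched with a toy model. Real campaigns run on gate-level netlists with commercial tooling; the hypothetical example below injects stuck-at faults into a 2-of-3 majority voter and measures how often a fault propagates to an observable output rather than being masked:

```python
import random

def voter(a, b, c):
    """Bitwise 2-of-3 majority vote over three sensor words."""
    return (a & b) | (a & c) | (b & c)

def inject(vals, idx, bit):
    """Stuck-at-1 fault on one bit of one input."""
    vals = list(vals)
    vals[idx] |= 1 << bit
    return vals

def campaign(trials=10_000, width=8, seed=7):
    """For each injected fault, compare golden vs faulty output:
    a differing output means the fault propagated (is observable)."""
    rng = random.Random(seed)
    propagated = 0
    for _ in range(trials):
        vals = [rng.randrange(1 << width) for _ in range(3)]
        idx, bit = rng.randrange(3), rng.randrange(width)
        golden = voter(*vals)
        faulty = voter(*inject(vals, idx, bit))
        if faulty != golden:
            propagated += 1
    return propagated / trials

print(f"fault propagation rate: {campaign():.1%}")
```

Even in this toy, most injected faults are masked by the redundancy, which is exactly the kind of quantitative evidence a spreadsheet-level judgment cannot provide, and why discovering a coverage gap only at this stage comes so late.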
“One of the areas Mentor sees this improving is in the use of more technologies to automate the safety-analysis phase after the expert judgment but well before fault-injection campaigns. This would provide analysis against the actual design to improve confidence of the results and provide higher efficiency during the safety development process.”