Software validation strategies for connected cars
Today’s cars have about 100 million lines of code onboard – more than the Hubble Space Telescope, a Boeing 787 Dreamliner, and the Facebook app combined.
Most of this software is a patchwork created to meet specific needs, ranging from airbag control to satellite-radio support, rather than part of an orderly architecture designed and managed by automakers. The ad-hoc nature of the approach has opened the door to software vulnerabilities.
How bad is the problem? In 2010, researchers from the University of California at San Diego and the University of Washington teamed up to study how to hack an automobile. Back then, US cars ‘only’ had about 70 electronic control units (ECUs) and a million lines of code – and weren’t connected to the Internet.
The team of researchers focused on the On-Board Diagnostics (OBD-II) port fitted to every car sold in the US since 1996. They looked at its outputs, and also applied malformed inputs (known as fuzz tests) to see what happened. The good news is that they couldn’t crash the vehicle’s core system, mainly because it is made up of many subsystems linked by a CAN bus. The bad news is that the researchers could turn the interior lights on and off, reset the speedometer reading, and disable individual anti-lock brakes.
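As a rough illustration of what such fuzz testing looks like, the sketch below sends randomly malformed frames onto a CAN bus using the python-can library. It assumes a SocketCAN interface named can0 wired to a bench ECU; the IDs and payloads are random illustrative values, not real ECU addresses.

```python
import random
import can  # pip install python-can

# Open a SocketCAN interface; "can0" is an assumed bench setup.
bus = can.interface.Bus(channel="can0", interface="socketcan")

try:
    for _ in range(1000):
        frame = can.Message(
            arbitration_id=random.randrange(0x800),  # random 11-bit ID
            data=bytes(random.randrange(256) for _ in range(8)),
            is_extended_id=False,
        )
        bus.send(frame)
        # Watch for replies or error frames that hint at misbehavior.
        reply = bus.recv(timeout=0.05)
        if reply is not None:
            print(f"ID 0x{reply.arbitration_id:03X}: {reply.data.hex()}")
finally:
    bus.shutdown()
```

Even this naive random approach turned up problems in 2010-era vehicles; modern fuzzers structure their inputs far more intelligently, as described later in this article.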
At about the same time, consumers were reporting that their cars were accelerating unexpectedly or braking suddenly. These incidents were traced to software defects that in some cases led to deaths.
Other recent hacks include the remote compromise of a Jeep Cherokee on a public road, made possible by a flaw in the vehicle’s cellular communications system. Fiat Chrysler Automobiles had to recall 1.4 million vehicles to correct the problem.
A second recent hack involved a Tesla Model S, which was compromised using a combination of physical access to the vehicle and flaws in its infotainment system’s browser. This hack didn’t compromise the vehicle’s safety, but did give the researchers insights into its network and how software updates are managed.
All of this underscores the fact that modern cars contain a great deal of software, much of it added in an ad hoc way. That software has vulnerabilities of its own, and creates further exposure through its interactions with other systems.
Automakers, therefore, need to test software for its integrity and security, as well as its reliability and safety. This demands the right software development process, and careful control of the way in which third-party software is acquired and adopted.
Since so much of today’s software is assembled from third-party components, rather than authored internally, concepts such as ‘software signoff’ can provide useful insights into the resultant ‘black box’ of code.
Signoff strategies are well understood in other industries. The principle is that no part of an assembly has to be perfect, but each must meet defined quality levels. Each part must also be tested by both the producer and the consumer, to enable rapid discovery of any issues with quality, the specification, or the testing process itself.
Software development currently has no equivalent, a shortcoming that leaves integrators with little insight into the quality and security of delivered software components and can create major risks for larger projects. The risk grows with the number of suppliers and the length of the supply chain.
Each member of that supply chain should establish signoff criteria, with thresholds that depend on factors such as the application domain, the required level of quality, and the relevant security threats. Typical thresholds cover the number of known security vulnerabilities in third-party library components, the number of critical defects found by static analysis, the minimum amount of time the code must survive model-based protocol fuzzing without failure, and the code coverage achieved by automated unit testing.
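As a simple illustration, signoff criteria like these can be captured as data and checked automatically. The metric names and threshold values below are illustrative assumptions, not an industry standard; real numbers would come from the signoff agreement between supplier and integrator.

```python
import operator

# (metric name, comparison, threshold); "le" means the measured value
# must not exceed the threshold, "ge" means it must meet or exceed it.
# All names and values here are hypothetical examples.
CRITERIA = [
    ("known_vulns_in_third_party_libs", operator.le, 0),
    ("critical_static_analysis_defects", operator.le, 0),
    ("fuzzing_hours_without_failure", operator.ge, 48),
    ("unit_test_coverage_percent", operator.ge, 80),
]

def signoff(metrics: dict) -> bool:
    """Pass only if every measured metric meets its threshold."""
    return all(compare(metrics[name], threshold)
               for name, compare, threshold in CRITERIA)

# Metrics gathered from the test tools for one delivered component.
measured = {
    "known_vulns_in_third_party_libs": 0,
    "critical_static_analysis_defects": 2,   # two criticals -> gate fails
    "fuzzing_hours_without_failure": 72,
    "unit_test_coverage_percent": 85,
}
print("PASS" if signoff(measured) else "FAIL")  # prints FAIL
```

In practice, each supplier and integrator would negotiate these values per component and enforce them automatically on every delivery.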
The automotive industry is already working on its own regulations around software quality and security. ISO 26262 does mention software testing, but doesn’t provide specific guidance, while guidelines such as MISRA C++ are tied to a single language. To address this gap, the Society of Automotive Engineers (SAE) has formed a taskforce to create a framework for the industry regarding software testing.
Software signoff is not a process validation activity, in which data is analyzed throughout the development lifecycle to show that consistent inputs lead to consistent, high-quality outputs. That approach is extremely hard to quantify for software development.
Software signoff, instead, is an inexpensive and easily quantified testing process to validate whether good and secure design standards have been followed.
To be useful, a software signoff metric must strongly correlate with quality and security, and should provide an objective and repeatable assessment resulting in a pass or fail. It should also be automatable.
By adding signoff gates to each stage of the development lifecycle using test automation and standardized criteria, software developers can ensure their code is robust and secure. Automated assessments should include the use of static analysis, dynamic analysis, fuzzing, and software composition analysis.
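As a rough sketch of how such gates might be wired into a build pipeline, the example below runs a set of automated assessments per lifecycle stage and blocks the pipeline if any fails. The stage names and run-* commands are hypothetical placeholders for real tool invocations, not an actual pipeline definition.

```python
import subprocess
import sys

def run_check(name: str, command: list[str]) -> bool:
    """Run one automated assessment; non-zero exit means the gate fails."""
    result = subprocess.run(command)
    passed = result.returncode == 0
    print(f"{name}: {'pass' if passed else 'FAIL'}")
    return passed

# Each lifecycle stage gates on its own standardized assessments.
# The run-* commands stand in for real static analysis, unit test,
# fuzzing, and composition analysis tool invocations.
GATES = {
    "commit":      [("static analysis", ["run-static-analysis"]),
                    ("unit tests",      ["run-unit-tests", "--coverage"])],
    "integration": [("fuzzing",         ["run-fuzzer", "--hours", "8"]),
                    ("composition",     ["run-sca-scan"])],
}

stage = sys.argv[1] if len(sys.argv) > 1 else "commit"
results = [run_check(name, cmd) for name, cmd in GATES[stage]]
if not all(results):
    sys.exit(1)  # block the pipeline: signoff criteria not met
```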
Synopsys is building a suite of tools that can help the automotive software supply chain apply diverse tests to its code to achieve strong validation of its robustness and quality. Here are some of the tools that form part of that strategy.
- Static code analysis: The Coverity tool analyses source code to identify logical inconsistencies and other indications that a developer may not have implemented a feature correctly. It identifies specific types of bugs and vulnerabilities, and analyzes the program structure and logic to find indications of undesirable behavior.
- Malformed input testing (fuzz testing): Defensics is a dynamic testing tool that finds a wide variety of problems that affect security and reliability. Fuzz testing tools apply carefully designed inputs to the code’s external interfaces, to check that it behaves correctly with invalid or unexpected inputs. More sophisticated fuzzing tools will intelligently structure the data to test both random and malicious scenarios.
- Software composition analysis: Protecode produces a ‘bill of materials’ for a product, showing that all its licensing obligations have been met, that all appropriate components have been tested, and that security vulnerabilities and the need for future updates are tracked after products are released. It works by scanning binary or source code to identify components from a database of open-source projects and releases, and/or from proprietary components and releases added by users. It also guides development decisions about which components to use, when to release updates and patches, and when to upgrade components in an existing product (a minimal sketch of the idea follows this list).
- Testing and validation suites: the Test Advisor validation suites can show that code conforms to the standards defining a protocol, given valid inputs. Validation often combines unit and integration tests, as well as later-stage functional and exploratory testing.
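To make the composition-analysis idea concrete, here is a minimal sketch of checking a bill of materials against a known-vulnerability list. The component list and vulnerability database are illustrative stand-ins; a real tool such as Protecode identifies components by scanning source or binaries and draws on continuously maintained vulnerability feeds.

```python
# Hypothetical bill of materials: (component, version) pairs found in a build.
BILL_OF_MATERIALS = [
    ("openssl", "1.0.1f"),
    ("zlib", "1.2.11"),
]

# Toy known-vulnerability database keyed by component and version.
KNOWN_VULNS = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
}

for component, version in BILL_OF_MATERIALS:
    vulns = KNOWN_VULNS.get((component, version), [])
    if vulns:
        print(f"{component} {version}: known vulnerabilities {vulns}")
    else:
        print(f"{component} {version}: no known vulnerabilities")
```

On a real project the bill of materials would be generated by the scanner itself, and the vulnerability data refreshed continuously so that issues discovered after release still surface.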
Modern cars may have more code than a space telescope, a jet and an app combined, but we are taming this complexity. The automotive industry already tests its software to check whether any vehicle using it will work as expected, be reliable, and meet safety and performance requirements. As cars become more complex and connected, further tests will be necessary on the code that runs them to establish that its communications, updating, user authentication and access-control mechanisms are secure; to validate its third-party components; and to ensure that it can protect its data. The suite of tools described above can help achieve this, and ensure that whatever the auto industry dreams up next will be safe, secure, reliable and robust.
Author
Robert Vamosi, CISSP and security strategist at Synopsys, is the author of When Gadgets Betray Us: The Dark Side of Our Infatuation with New Technologies. He is also featured in the history-of-hacking documentary film Code 2600. For more than fifteen years Vamosi has written about information security for such publications as Forbes, CBS News, and PCWorld.