Is your chip design leakier than it looks? Even though a system-on-chip (SoC) design may implement hardware-assisted security features such as protected key storage and integrated crypto engines, inadvertent errors in design could make the features as accessible to hackers as to OEMs attempting to secure their products. Jasper Design Automation has developed a tool, one in its series of formal-verification ‘apps’, that analyzes logic for backdoor methods of accessing on-chip security.
In an interview at the recent ARM TechCon, Jasper vice president of worldwide applications engineering Lawrence Loh said much of the attention in security lies in the firmware and operating-system software. “But they have one fundamental assumption: that the hardware is reliable. If the hardware isn’t secure you have a moving target.”
SoCs that implement some form of security will generally isolate these features to a particular part of the device and within specific modes. In many cases, this will be centered on ARM’s TrustZone technology.
“Hardware of this type is designed with a very important assumption. There are only a handful of ways to access the information such as private keys. At a very high level, the view is very simple: that there is no unexpected read or write source from unauthorized sources such as a test mode,” said Loh.
Hackers and security researchers have successfully poked holes in supposedly secure chips, often gaining access through hidden test modes, using a variety of attacks. Hackers tend to refer to the general class of attack as ‘fuzzing’ – randomly toggling pins to see if the chip behaves in unexpected ways. Security researchers Sergei Skorobogatov of the University of Cambridge and Chris Woods of Quo Vadis Labs employ variants of a technique they call pipeline emission analysis to uncover hidden modes and hardware-security holes or backdoors that should probably have been disabled after manufacturing test, if not removed entirely.
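The fuzzing idea described above can be sketched in a few lines of Python. This is a toy model, not any real chip or tool: the device function, pin width, and unlock pattern are all invented for illustration. The point is simply that random stimulus applied long enough will stumble on an undocumented test mode if one exists.

```python
import random

# Hypothetical chip model with a hidden test mode: one undocumented
# 4-bit pin pattern unlocks read access to the key register.
HIDDEN_UNLOCK = 0b1011  # assumed, purely illustrative

def chip_read_key(pins: int) -> str:
    """Return the key if the pin pattern enables the hidden test mode."""
    return "SECRET_KEY" if pins == HIDDEN_UNLOCK else "locked"

def fuzz(trials: int, width: int = 4, seed: int = 0) -> set:
    """Randomly toggle pins and record any patterns that leak the key."""
    rng = random.Random(seed)
    leaks = set()
    for _ in range(trials):
        pins = rng.getrandbits(width)
        if chip_read_key(pins) != "locked":
            leaks.add(pins)
    return leaks

print(fuzz(5000))
```

As Loh notes, this approach is nondeterministic: a real pin space is vastly larger than four bits, so exhaustive confidence is impossible, which is the gap formal path analysis aims to close.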
The job of ensuring that backdoors are not present in the final silicon is fraught with problems. “A company that releases a product today is almost certainly no longer releasing something they have done entirely by themselves. When you add test logic and connectivity between those different IPs, you are no longer sure that the hardware is secure.
“Checking test modes is one of the most important parts of this. If you see a path around the fuse, then you need to decide how to move the fuses. What people tend to do today is have a lot of people staring at a flow diagram of the design. Or they hire hackers to attempt to break in. But it’s very nondeterministic,” said Loh.
Although formal-verification technology looks to be ideal for checking whether a set of signals or registers can be accessed outside of a given mode or via test pins, this is not quite as simple as it looks, Loh explained. “Usually, with formal, you say this thing is true or not true. But for this class of problem you want to check that access is possible from point A to B but not through C to D or ‘it should not go through any top level signal unless it goes through this path’. You may have one source with multiple destinations.
“Standard formal technology doesn’t check this because it doesn’t perform path-based checks. We have utilized some of our path-analysis capability to perform the analysis,” said Loh.
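The kind of check Loh describes – access from A to B is allowed, but only through an approved path – can be illustrated with a simple graph-reachability sketch. This is not Jasper’s algorithm; the netlist, signal names, and search routine below are assumptions made purely to show the shape of a path-based property, with the design abstracted to a directed graph of signal dependencies.

```python
from collections import defaultdict

# Toy netlist as directed edges between signals (names are illustrative).
edges = [
    ("key_reg", "crypto_engine"),   # intended path for the private key
    ("crypto_engine", "data_out"),
    ("key_reg", "scan_mux"),        # sneak path through test logic
    ("scan_mux", "test_out"),
]

def reachable(edges, src, dst, avoid=frozenset()):
    """Depth-first search: is dst reachable from src without entering `avoid`?"""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen or node in avoid:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return False

# Intended access: the key reaches data_out, and only via the crypto engine.
assert reachable(edges, "key_reg", "data_out")
assert not reachable(edges, "key_reg", "data_out", avoid={"crypto_engine"})

# Security violation: the key also reaches a test output through the scan mux.
assert reachable(edges, "key_reg", "test_out")
```

Note that the property is two-sided: the destination must be reachable overall, yet unreachable once the approved waypoint is blocked – exactly the “possible from A to B but not through C” formulation, which a plain true/false assertion cannot express directly.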
The Security Path Verification App developed by Jasper can be used with RTL and gate-level descriptions. “If you can do it in RTL, it’s a good thing as it’s easier to debug. But some logic, such as scan, isn’t present in RTL,” said Loh, so a gate-level check would be needed to ensure that test insertion has not inadvertently introduced a sneak path.
“But often the best way to fix security is to look back at the overall security architecture: maybe I allowed too many paths. If I leave that analysis until the gate stage, there is no realistic way I can fix that.”