ARM deploys data-center tech to study verification patterns
Software technologies such as Spark and Hadoop have helped propel cloud computing to the forefront of IT and driven demand for the processors and memory that power the server blades running them. But ARM is among the companies turning the same software inward, using it to find better ways to check its own processor designs.
At the Verification Futures seminar organized by T&VS in April, Bryan Dickman, director of engineering analytics in ARM’s technology services group, described how the company is using custom applications built on Hadoop and Spark to sift through the mountains of data generated by RTL verification to find patterns that can help improve workflows.
As well as its own distributed data-processing engine, Hadoop provides a distributed file system that can span many different computers. Apache Spark is a distributed data-processing engine that can work with Hadoop to run workloads such as machine learning and database queries.
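For readers unfamiliar with the stack, a minimal PySpark sketch shows how the two pieces fit together: Spark reads regression results straight out of the Hadoop file system and runs a distributed query over them. The HDFS path and column names here are illustrative assumptions, not details ARM disclosed.

```python
# Minimal sketch: Spark querying regression results stored in HDFS.
# Path and column names are hypothetical, for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("regression-analytics").getOrCreate()

# Spark reads directly from the Hadoop distributed file system (HDFS).
results = spark.read.csv(
    "hdfs:///verification/regressions/*.csv",
    header=True,
    inferSchema=True,
)

# A simple distributed query: pass/fail counts per test.
results.groupBy("test_name", "status").count().show()
```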
Big data in verification
“RTL verification generates a lot of data,” Dickman said. “It’s becoming a big-data problem. We are looking at how we can take machine-learning algorithms and apply them to the data. And to design predictive workflows that improve our productivity.
“We’ve had the data collection going on at this level for probably the last two years or so. We’ve got complete project life-cycles and a lot of historical data,” he added.
The ARM team uses applications built on Hadoop and Spark to ingest the data created by the various verification runs and push the results into a set of database tables that can be presented to users in a variety of ways, such as dashboard interfaces that show graphically the progress of projects.
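Dickman did not describe ARM’s pipeline in code, but a hedged sketch of such an ingest step might look like the following: raw run records are aggregated into a weekly per-unit summary and persisted as a table for dashboards to query. The file location, schema and table name are all assumptions for illustration.

```python
# Hedged sketch of an ingest step: normalize raw run records and persist
# them as a table a dashboard can query. All names below are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-runs").enableHiveSupport().getOrCreate()

runs = spark.read.json("hdfs:///verification/raw_runs/")  # hypothetical location

summary = (
    runs.withColumn("week", F.weekofyear(F.to_date("run_date")))
        .groupBy("project", "unit", "week")
        .agg(
            F.sum("cycles").alias("cycles"),
            # Count only the failing runs in each group.
            F.count(F.when(F.col("status") == "fail", True)).alias("failures"),
        )
)

# Persist for downstream dashboards; overwrite keeps the table current.
summary.write.mode("overwrite").saveAsTable("verif.weekly_unit_summary")
```

Writing the aggregate out as a table, rather than recomputing it on every query, is what lets lightweight dashboard front-ends stay responsive.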
Interactive visualizations are important in the data-analytics work, he explained, and represent a lot of the work done so far. “Good visualizations are highly illuminating when you are trying to understand data. When you want to visualize data, don’t copy-paste it into PowerPoint. Make it interactive.”
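As an example of the kind of interactivity he means, a charting library such as Plotly can turn a summary table into a chart with hover, zoom and filtering built in. This sketch reuses the hypothetical Spark session and table from above; it is one possible approach, not ARM’s tooling.

```python
# One way to make the view interactive rather than a static slide:
# plot the weekly summary with Plotly. Table name is the hypothetical
# one from the earlier sketch.
import plotly.express as px

df = spark.table("verif.weekly_unit_summary").toPandas()

fig = px.bar(df, x="week", y="cycles", color="unit",
             title="Cycles consumed per week by unit")
fig.show()  # opens an interactive chart in the browser or notebook
```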
Bug-hunting guidance
The interactive views provided by the processed data include hierarchical views of the number of cycles consumed per week by different units within a project. These are used to assess the effectiveness of tests run during regressions. “Are we running dumb cycles?” is the question the engineers seek to answer, Dickman said.
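The “dumb cycles” question can be posed directly against such a summary: rank units by the cycles they consume per observed failure. Again, the table and column names below are the hypothetical ones used above, not ARM’s.

```python
# Hedged sketch: screen for units that burn cycles without finding failures.
from pyspark.sql import functions as F

weekly = spark.table("verif.weekly_unit_summary")

(weekly.groupBy("project", "unit")
       .agg(F.sum("cycles").alias("cycles"),
            F.sum("failures").alias("failures"))
       # High cycles-per-failure suggests regressions adding little value.
       .withColumn("cycles_per_failure", F.col("cycles") / F.col("failures"))
       .orderBy(F.desc("cycles_per_failure"))
       .show())
```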
“When you start analyzing the data, it’s interesting to see how many tests really never change state. Only a small number go from pass to fail or fail to pass.”
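One plausible way to measure that, sketched below, is to line up each test’s status across consecutive runs with a window function and count how many tests ever transition. The per-run results table and its columns are assumptions.

```python
# Sketch of the state-change analysis: compare each test's status on
# consecutive runs and count transitions. Column names are assumptions.
from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy("test_name").orderBy("run_date")

transitions = (
    spark.table("verif.test_results")  # hypothetical per-run results table
         .withColumn("prev_status", F.lag("status").over(w))
         .where(F.col("prev_status").isNotNull())
         .withColumn("changed",
                     (F.col("status") != F.col("prev_status")).cast("int"))
)

# How many tests ever change state? Most, by Dickman's account, never do.
(transitions.groupBy("test_name")
            .agg(F.max("changed").alias("ever_changed"))
            .groupBy("ever_changed")
            .count()
            .show())
```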
Dickman added: “We do a lot of analytics around bugs. We drill into bugs by methodology and areas of the design: what methodologies find bugs and when. It is interesting to see how you are finding bugs against the amount of verification you are running. You can start asking questions like: ‘I have been running lots of cycles here but no bugs discovered: were they dumb cycles?’”
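That last question lends itself to a join between cycle counts and bug discoveries, something like the hedged sketch below. The bug-tracker extract and its schema are invented for illustration.

```python
# Hedged sketch: join cycle spend with bug discoveries per unit and
# methodology, then look for heavy spend that finds nothing.
# Both table schemas are assumptions.
from pyspark.sql import functions as F

cycles = (spark.table("verif.weekly_unit_summary")
               .groupBy("unit")
               .agg(F.sum("cycles").alias("total_cycles")))

bugs = (spark.table("verif.bugs")  # hypothetical bug-tracker extract
             .groupBy("unit", "methodology")
             .agg(F.count("*").alias("bugs_found")))

effectiveness = (cycles.join(bugs, "unit", "left")
                       .withColumn("cycles_per_bug",
                                   F.col("total_cycles") / F.col("bugs_found")))

# Units with many cycles but null or low bug counts are candidate
# 'dumb cycles' in the sense Dickman describes.
effectiveness.orderBy(F.desc("total_cycles")).show()
```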