Computer vision is one of the most dynamic segments of the artificial intelligence (AI) market. It is attracting massive amounts of funding largely because it targets such a wide range of applications (Figure 1).
This two-part series will look at the role high level synthesis (HLS) technology plays in the development of computer vision systems. Part One describes key design challenges and how HLS addresses them. Part Two focuses on flow implementation and provides results from a recent case study.
Computer vision challenges
Design for computer vision is tough. Here are some of the more important challenges.
- Computer vision systems use computationally intensive convolutional neural networks (CNNs) for training and inference. These are testing the performance limits of standard platforms (e.g., GPU, CPU), and making the case for ‘domain-specific’ alternatives.
- Because many computer vision applications are safety-critical or time-sensitive, they perform more processing at the edge than other AI technologies, and that processing must be done within a tight power budget.
- The market is still evolving rapidly. Alongside the computational load, algorithmic complexity is increasing all the time.
According to Mike Fingeroff, HLS Technologist at Mentor, a Siemens Business, this type of environment requires a design flow that has three core qualities:
- It accelerates schedules.
- It allows for efficient hardware evaluation.
- It enables late stage changes.
HLS for computer vision
Let’s consider those three requirements of a computer vision design infrastructure in turn.
HLS shortens design cycles by raising design abstraction above RTL, typically using SystemC, C or C++ to define the project before synthesis. This is especially useful for computer vision because the hardware and algorithmic design environments align well. Badru Agarwala, General Manager with Mentor, explains:
“Algorithm developers prefer to write code in C++, do not want to learn register transfer languages – such as Verilog or VHDL – and they do not want to use the tools and methodologies required for the hardware implementation process,” he writes. “To address this problem, some algorithm developers write their code in C++ and then use high-level synthesis tools.”
In this way, the traditional benefits of HLS are further amplified.
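To make that concrete, here is a minimal sketch of the kind of plain C++ an algorithm developer might write and then hand to an HLS tool. It is purely illustrative: the function name, image size and integer data types are assumptions for this example, not code from Mentor or the article, and a production version would typically use fixed-point types and tool-specific constraints.

```cpp
#include <array>
#include <cstddef>

// Illustrative only: a 3x3 convolution written as ordinary C++,
// the style of algorithm-level code an HLS flow can take as input.
// W, H and the int data type are assumptions for this sketch.
constexpr std::size_t W = 8, H = 8;

using Image  = std::array<std::array<int, W>, H>;
using Kernel = std::array<std::array<int, 3>, 3>;

Image convolve3x3(const Image& in, const Kernel& k) {
    Image out{};  // zero-initialised; border pixels stay zero
    for (std::size_t y = 1; y + 1 < H; ++y) {
        for (std::size_t x = 1; x + 1 < W; ++x) {
            int acc = 0;
            // Accumulate the 3x3 neighbourhood around (y, x)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[dy + 1][dx + 1] * in[y + dy][x + dx];
            out[y][x] = acc;
        }
    }
    return out;
}
```

The point is that nothing here is hardware-specific: the same source can be verified in a standard C++ environment and then synthesized, which is exactly the separation of concerns the quote describes.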
The rate of change in computer vision algorithms makes system definition tricky. CNNs are computationally intensive, and as algorithms are refined it can be difficult to maintain the optimal split between hardware and software. Here, as Agarwala again explains, some of the inherent advantages of HLS in terms of ‘what if’ analysis are important.
“Developers… quickly make tradeoffs and then automatically generate RTL.
“These developers can try out multiple implementations of new algorithms, explore the performance and power consumption of implementing these algorithms in hardware or software, and examine the tradeoffs of running on ASICs, FPGAs, CPUs, or GPUs.
“Tradeoffs are also explored by varying the number of processor cores and experimenting with memory sizes and configurations.”
The time saved through design abstraction and the speed of the HLS process give the hardware and software teams a lot of latitude here.
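The idea of one algorithm, many implementations can be sketched as follows. In real HLS flows such tradeoffs (parallelism, memory configuration) are set with tool directives while the C++ stays unchanged; this sketch uses a hypothetical template parameter as a stand-in for an unroll directive, and is not any vendor's syntax.

```cpp
#include <array>
#include <cstddef>

// Illustrative sketch, not vendor directive syntax: UNROLL stands in
// for an HLS unroll directive, showing how one algorithm can map to
// differently parallel implementations without changing its result.
template <std::size_t UNROLL, std::size_t N>
int dot(const std::array<int, N>& a, const std::array<int, N>& b) {
    static_assert(N % UNROLL == 0, "N must be divisible by UNROLL");
    int acc[UNROLL] = {};  // UNROLL partial sums, mimicking parallel MAC units
    for (std::size_t i = 0; i < N; i += UNROLL)
        for (std::size_t u = 0; u < UNROLL; ++u)
            acc[u] += a[i + u] * b[i + u];
    int sum = 0;  // final reduction of the partial sums
    for (std::size_t u = 0; u < UNROLL; ++u)
        sum += acc[u];
    return sum;
}
```

Because every variant computes the same answer, the team can compare the area, power and throughput of each candidate hardware structure while keeping a single verified algorithm source.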
The evolution of computer vision implies that projects will sometimes need to cope with changes very late in the day. They may need a final tweak to the algorithm, or an increase in, or reallocation of, resources to realize an unfamiliar application.
“Teams need a way to design and verify an algorithm while the specifications and requirements evolve without starting over every time there is a change. HLS flows allow this by using constraints and directives that guide the process, while leaving the algorithm unchanged,” writes Agarwala.
“For example, at the last minute, a team decides to change the clock speed. That constraint is changed and the HLS tool regenerates the RTL according to the new clock speed, with no change to the C++ algorithm.
“This type of late-stage change is not possible in a traditional RTL flow, yet with HLS the RTL is rebuilt at the push of a button.”
A tried-and-tested HLS flow captures all three of these qualities (Figure 2).
Given the complexity of all AI development – and particularly the challenges computer vision poses – HLS provides an environment that brings the software/algorithm and hardware teams together so that they can collaborate directly on a consistent platform.
Part Two of this series looks at how these project management and methodological goals can be realized in practice, and refers specifically to the development of computer vision IP for deep learning and inference object detection at Chips&Media.