Bringing AI into our lives

By Luke Collins | Posted: February 14, 2018

Michael Thompson is the senior manager of product marketing for the DesignWare ARC processors at Synopsys.

Artificial intelligence (AI) is an umbrella term that encompasses a broad range of processing tasks including search in the cloud, robotics, speech recognition and translation, expert systems, and more.

A lot of research on AI is focused on creating machines that can learn and solve new challenges, especially at companies such as Amazon, Google, Microsoft and Yahoo, which are focusing on applications for the cloud. What’s less well known is that AI capabilities are also moving rapidly into embedded applications that run on the devices we have in our homes, cars and pockets.

Today’s embedded AI machines are similar in concept to an Amazon Echo. The Echo combines good voice recognition (perception), fast processing (decision making), and an action (response), such as answering your question, playing music or switching on the lights. Most AI applications follow this same sequence of perception, decision making, and response.
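To make this three-stage flow concrete, here is a minimal Python sketch of a perceive-decide-respond loop. Every function in it is a hypothetical placeholder for illustration; none of this is an actual Echo or Synopsys API.

```python
# Minimal perceive -> decide -> respond loop. All functions are
# hypothetical stubs for illustration, not a real assistant API.

def perceive(audio_frame: bytes) -> str:
    """Perception: hypothetical speech-to-text stage (stubbed)."""
    return "turn on the lights"

def decide(utterance: str) -> str:
    """Decision making: map the recognized utterance to an intent."""
    intents = {
        "turn on the lights": "lights_on",
        "play music": "play_music",
    }
    return intents.get(utterance, "unknown")

def respond(action: str) -> None:
    """Response: act on the chosen intent (here, just report it)."""
    print(f"executing action: {action}")

respond(decide(perceive(b"...")))  # prints: executing action: lights_on
```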

Today, the concept of AI includes understanding language, interpreting complex data, machine vision, intelligent routing in content delivery networks, and making vehicles autonomous.

AI is being used heavily in machine vision, where advances in neural network technology over the past five years have dramatically increased accuracy, to the point where machines can now outperform humans on image recognition and related tasks. Research continues, and new algorithms are being developed that are faster, more accurate, and much simpler. Figure 1 shows machine vision used to perform scene segmentation and object identification with a convolutional neural network (CNN) algorithm.


Figure 1 Example of scene segmentation using machine vision (Source: Toshiba and Denso)
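The segmentation in Figure 1 is built from layers of convolutions. As a rough illustration of that core operation, here is a minimal numpy sketch of a single 2D convolution; the hand-picked 3×3 kernel is purely illustrative, whereas a real network learns thousands of such kernels from training data.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D convolution (no kernel flip, as in deep-learning
    frameworks) -- the multiply-accumulate at the heart of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

image = np.random.rand(8, 8)          # toy 8x8 grayscale image
edge_kernel = np.array([[-1, 0, 1],   # hand-picked edge detector;
                        [-2, 0, 2],   # a CNN learns its kernels
                        [-1, 0, 1]])  # from data instead
print(conv2d(image, edge_kernel).shape)  # (6, 6)
```

In a full CNN, many such filtered outputs are stacked, passed through nonlinearities, and repeated over dozens of layers to produce per-pixel class labels like those in Figure 1.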

Neural networks are also being used in image captioning, text generation, character recognition, language translation, radar, audio, and many other applications. For example, NASA is using neural network technology to analyze data from telescopes to find new planets. The system is more accurate than humans, and can analyze the data many orders of magnitude faster. Using this system, NASA recently found an eighth planet revolving around a star (Kepler-90) that is 2545 light-years away – the first known solar system outside of our own with eight planets.

Much of AI’s shift into embedded applications is being enabled by advances in process technology, which make it possible to deliver much greater computing capability at much lower energy consumption. Shrinking process geometries also let designers put much more memory alongside advanced processors on the same chip.

Microprocessor architectures are also evolving. For example, the DesignWare ARC HS44 has a superscalar pipeline and delivers up to 5500 DMIPS per core in a 16FF process under worst-case conditions. It fits into 0.06mm² of silicon and draws less than 50µW/MHz, and it can be scaled to dual-core and quad-core versions for higher performance. ARC HS family cores can handle the application host, communication, control, and pre- and post-processing tasks of an AI application. Figure 2 shows an AI development platform.


Figure 2 Artificial intelligence platform using ARC HS processor (Source: NARL Taiwan)
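Those headline figures are easy to turn into back-of-envelope estimates. The sketch below uses only the numbers quoted above; the 1GHz operating point and the assumption that DMIPS scale linearly across cores are simplifications for illustration, not Synopsys specifications.

```python
# Back-of-envelope power/performance estimates from the quoted figures.
# The 1 GHz clock and linear multi-core scaling are assumptions for
# illustration only, not Synopsys specifications.

UW_PER_MHZ = 50        # quoted: draws less than 50 uW/MHz
DMIPS_PER_CORE = 5500  # quoted: up to 5500 DMIPS per core (16FF, worst case)

clock_mhz = 1000       # assumed operating point
for cores in (1, 2, 4):
    power_mw = cores * clock_mhz * UW_PER_MHZ / 1000.0
    print(f"{cores} core(s): ~{cores * DMIPS_PER_CORE} DMIPS, "
          f"~{power_mw:.0f} mW at {clock_mhz} MHz")
# 1 core(s): ~5500 DMIPS, ~50 mW at 1000 MHz
# 2 core(s): ~11000 DMIPS, ~100 mW at 1000 MHz
# 4 core(s): ~22000 DMIPS, ~200 mW at 1000 MHz
```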

The nature of some AI tasks makes it worthwhile to develop specialized processors for them. For example, GPUs have been used for machine vision applications, but they are being replaced by specialized processors such as Synopsys’ DesignWare EV6x Embedded Vision Processors. The EV6x family, comprising the EV61, EV62, and EV64 variants, can be configured with a programmable CNN engine to perform object detection and classification on video streams up to 4K HD. The family features integrated heterogeneous processing units (Figure 3) that can be configured with up to 3520 MACs, delivering up to 4.5 Tera MACs per second. The CNN core supports all the key CNN algorithms, including AlexNet, GoogLeNet, ResNet, SqueezeNet, and TinyYolo. This level of processing power, once almost unimaginable, can be integrated into an SoC to support advanced vision processing of HD video streams in applications such as the scene segmentation and classification shown in Figure 1.


Figure 3 DesignWare EV6x Embedded Vision Processors include up to four vision CPUs and an optional CNN engine (Source: Synopsys)
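Those throughput numbers are internally consistent, as a quick sanity check shows; the one-multiply-accumulate-per-unit-per-cycle assumption below is mine, not a Synopsys statement.

```python
# Sanity check on the quoted EV6x figures: 4.5 TMAC/s peak from 3520
# MAC units implies roughly a 1.28 GHz clock, assuming each MAC unit
# completes one multiply-accumulate per cycle (my assumption).

mac_units = 3520
peak_macs_per_s = 4.5e12
implied_clock_ghz = peak_macs_per_s / mac_units / 1e9
print(f"implied clock: ~{implied_clock_ghz:.2f} GHz")  # ~1.28 GHz
```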

At the same time as these specialized processors are increasing in performance, the algorithms they run are becoming more accurate. Figure 4 shows the structure of several recent CNN algorithms developed for classification tasks, which have improved very rapidly in accuracy and capability. ResNet’s 3.6% error rate is better than a human expert can achieve. These algorithms can run on vision processors such as the DesignWare EV6x and be integrated into an SoC for embedded applications.


Figure 4 Algorithmic advancement of object classification with CNNs
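Classification figures like ResNet’s 3.6% are conventionally reported as top-5 error on the ImageNet benchmark: the network is wrong only if the true label is absent from its five highest-scoring guesses. Here is a minimal sketch of that metric, run on random toy scores rather than a real network’s outputs.

```python
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose true label is NOT among the five
    highest-scoring classes -- the metric behind figures like 3.6%."""
    top5 = np.argsort(scores, axis=1)[:, -5:]      # 5 best classes per sample
    hits = (top5 == labels[:, None]).any(axis=1)   # true label among them?
    return 1.0 - float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.random((100, 1000))     # toy scores: 100 samples, 1000 classes
labels = rng.integers(0, 1000, 100)  # toy ground-truth labels
print(f"top-5 error on random scores: {top5_error(scores, labels):.1%}")
```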

Over the next ten years, we can expect AI to enable cars that drive themselves, personal assistants that are more intelligent, and natural language translation that is seamless. Continuing improvements in microprocessors, AI algorithms, and process technology will enable as-yet undreamed-of applications for AI. Synopsys will keep working at the leading edge of the technology developments necessary to enable these capabilities.

Author

Michael Thompson is a product marketing manager at Synopsys.

Company info

Synopsys Corporate Headquarters
690 East Middlefield Road
Mountain View, CA 94043
(650) 584-5000
(800) 541-7737
www.synopsys.com
