Synopsys has been developing its DesignWare library of general-purpose semiconductor IP for decades and has more recently begun adapting some of that IP for specific applications. The company has now catalogued the parts of its IP offering that relate to the increasingly important fields of machine learning (ML) and artificial intelligence (AI) in a micro-site and brochure.
The site is divided into three main areas, covering specialized processing, memory and real-time data connectivity. The specialized processing section covers Synopsys offerings such as its embedded vision processors, which support heterogeneous compute and high-performance convolutional neural networks. It also addresses the ARC processor family, which offers scalar and vector capabilities and customizable APEX accelerators, tightly coupled memories to reduce bottlenecks, and a range of subsystems to enable key AI functions. There are also details of the company’s ASIP Designer, which enables designers to create application-specific instruction-set processors with high degrees of parallelism and specialized datapath elements.
The memory section of the micro-site and brochure covers memory IP offerings tailored to overcoming various constraints, including bandwidth and capacity limits, as well as to ensuring cache coherency. DDR IP addresses capacity needs, HBM2 IP addresses the bandwidth bottleneck, and CCIX enables cache coherency with virtualized memory capabilities for heterogeneous computing and reduced latency. Foundation IP provides a variety of embedded memories, logic libraries, TCAMs, and multi-port memories, with high-density, low-leakage, and high-performance options.
The real-time data connectivity section of the micro-site provides links to more details about a broad range of IP that can be thought of as enabling ML and AI, such as connectivity to CMOS image sensors, microphones, and motion sensors, through widely used interfaces such as MIPI, USB, DisplayPort, HDMI, PCI Express, CCIX, and Ethernet. The micro-site also covers which of Synopsys' IP offerings are available for implementation on advanced FinFET process technologies, an important factor in reducing energy consumption in computationally intense ML and AI applications.
Many of the issues involved in making the right design tradeoffs when using such IP for implementing ML and AI have been covered in contributed articles on Tech Design Forum, including:
- Picking the right-sized crypto processor for your SoC
- The impact of AI on autonomous vehicles
- Optimizing power and performance trade-offs in CNN implementations for embedded vision
- Bringing AI into our lives
- Choosing between DDR4 and HBM in memory-intensive applications
- Using CCIX to implement cache coherent heterogeneous multiprocessor systems
- High-resolution visual recognition needs high-performance CNNs
- Teaching computers to recognize a smile (or frown, or grimace or…)
Synopsys also offers the DesignWare Technical Bulletin, which often covers topics related to ML and AI.
Company info

Synopsys Corporate Headquarters
690 East Middlefield Road
Mountain View, CA 94043
(650) 584-5000
(800) 541-7737
www.synopsys.com