Ten key tips for effective memory verification

By Nasib Naser | Posted: October 19, 2015
Topics/Categories: IP - Selection, EDA - Verification

Nasib Naser is senior staff corporate applications engineer in the verification group for Synopsys.

Increasingly complex bus, interface and memory access protocols are being used in SoCs to help meet demands to integrate more hardware functions and supporting software within tight power budgets. Using standard protocols helps designers focus on where they can provide true differentiating value. These ‘shortcuts’ for designers add complexity for the verification team, requiring them to understand evolving protocols and find ways of showing that they are working as expected in the SoC – within short timescales.

Verification IP (VIP) can help, especially for memory implementations, providing tools that enable verification engineers to do three main things: verify that memory-controller implementations comply with standards; test an implementation against specific vendor memory components, such as DIMMs; and, at the SoC level, drive traffic for SoC verification and power analysis.

How do you choose the right memory VIP for your application? Here are ten things we think you should consider when making your choices:

1 – Ease of integration is key

One of the main reasons for using memory VIP is to get to simulation fast, which means integrating the memory verification environment with the rest of the SoC testbench and your compile flow, which can often be a lengthy process. Once you have identified the memory VIP and reference examples you want to use, the steps involved in integrating them into your environment include:

  • Creating a basic environment to encapsulate the memory VIP elements
  • Creating a customized configuration to specify the memory type (e.g. DDR3, DDR4, DIMM)
  • Integrating the basic environment into the test, and passing the configuration to it
  • Specifying the catalog part details, and ensuring that the controller and the memory model you are using are in sync
  • Creating instances of the memory interface and connecting them to the DUT

Look for memory VIP that comes with good quick-start guides, comprehensive reference examples and lots of searchable documentation that makes it easier to do this integration task.
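
To make these steps concrete, here is a minimal UVM-style sketch of the encapsulation and configuration steps above. The class names ddr_mem_agent and ddr_mem_cfg are hypothetical stand-ins for whatever classes your chosen VIP actually provides; only the uvm_env, uvm_config_db and factory mechanics are standard UVM.

  class mem_env extends uvm_env;
    `uvm_component_utils(mem_env)

    ddr_mem_agent agent; // hypothetical VIP memory agent
    ddr_mem_cfg   cfg;   // hypothetical VIP configuration (memory type, part, etc.)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      // The test creates the configuration object and passes it down via the config DB
      if (!uvm_config_db#(ddr_mem_cfg)::get(this, "", "cfg", cfg))
        `uvm_fatal("NOCFG", "No memory VIP configuration supplied by the test")
      uvm_config_db#(ddr_mem_cfg)::set(this, "agent", "cfg", cfg);
      agent = ddr_mem_agent::type_id::create("agent", this);
    endfunction
  endclass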

2 – Find easy ways to choose your parts

You’ll want to test your block or SoC with different memories, to work out which is the best architectural fit and to verify that your choice is working as expected in your environment.

You’ll want to be able to specify a part by vendor name, compile the environment once, and then pass a part number to simulation at run time, so you don’t have to recompile to select a different part. Part selection should also be possible based on JEDEC specs, and/or with filters based on vendor, package, density, number of ranks, data width and so on. You should be able to apply constrained randomization to the choice of part, for example to alter the number of ranks per DIMM, or a DRAM’s width. It should also be possible to override a part’s timing parameters after its configuration has been loaded, to ensure it works with the memory controller you are using.
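
As an illustration of the kind of run-time part selection and constrained randomization described above, the sketch below uses a plusarg to pick a catalog part without recompiling and randomizes DIMM attributes. The field names, the +MEM_PART plusarg and the fallback part string are hypothetical rather than any particular VIP's API.

  class dimm_selection extends uvm_object;
    `uvm_object_utils(dimm_selection)

    rand int unsigned num_ranks;   // ranks per DIMM
    rand int unsigned dram_width;  // DRAM data width
    string            part_number; // catalog part chosen at run time

    constraint legal_c {
      num_ranks  inside {1, 2, 4};
      dram_width inside {4, 8, 16};
    }

    function new(string name = "dimm_selection");
      super.new(name);
    endfunction

    // Compile once; select a different part per run with +MEM_PART=<name>
    function void pick_from_cmdline();
      if (!$value$plusargs("MEM_PART=%s", part_number))
        part_number = "generic_ddr4_part"; // illustrative fallback only
    endfunction
  endclass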

3 – Ensure you have control over the initialization process

Since one of the key goals here is to get to simulation fast, it’s helpful to find ways to shortcut the initialization process where possible. 

This can be done by using memory VIP that includes ways to scale down initialization timing parameters, or even to skip initialization altogether, without altering the memory’s behavior.
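
The sort of control to look for can be pictured as a pair of configuration knobs: a mode that selects full, scaled or skipped initialization, and a scale factor applied to the initialization timing parameters. The names below are hypothetical and simply illustrate the capability.

  typedef enum {INIT_FULL, INIT_SCALED, INIT_SKIP} init_mode_e;

  class mem_init_cfg extends uvm_object;
    `uvm_object_utils(mem_init_cfg)

    init_mode_e init_mode    = INIT_SCALED; // run the init sequence with scaled timing
    real        timing_scale = 0.01;        // e.g. run tINIT at 1% of its real value
    // INIT_SKIP would bring the model up already initialized,
    // without changing its subsequent behavior.

    function new(string name = "mem_init_cfg");
      super.new(name);
    endfunction
  endclass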

4 – Coverage checks are vital

How much verification is enough? This is a key issue in general verification and applies equally as much to memory verification.

One way to address this issue is to use memory VIP that comes with pre-defined covergroups, addressing issues such as memory-state transitions, training and power-down modes.

Assessing functional coverage for DDR memories is a bottom-up process: first looking at signals in terms of their state, toggle and metadata, then moving up to the transaction level, using metadata and cross coverage. Metadata coverage here means measuring characteristics such as ‘valid-to-ready delay’.
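
As a rough sketch of what such pre-defined covergroups might look like, the example below covers memory-state transitions at the signal/state level and crosses command type with a valid-to-ready delay bucket at the transaction level. The state encoding, bins and sampled fields are illustrative, not taken from any specific VIP.

  // Declared in a package, class or interface alongside the monitor
  typedef enum {IDLE, ACTIVE, READING, WRITING, PRECHARGING, POWER_DOWN} bank_state_e;

  // State-level coverage: memory-state transitions
  covergroup ddr_state_cg with function sample(bank_state_e st);
    state_cp : coverpoint st {
      bins act_to_rd = (ACTIVE => READING);
      bins act_to_wr = (ACTIVE => WRITING);
      bins rd_to_pre = (READING => PRECHARGING);
      bins pwr_down  = (IDLE => POWER_DOWN);
    }
  endgroup

  // Transaction-level coverage: command type crossed with a metadata measure
  covergroup ddr_txn_cg with function sample(bit is_write, int unsigned vr_delay);
    cmd_cp      : coverpoint is_write { bins rd = {0}; bins wr = {1}; }
    delay_cp    : coverpoint vr_delay { bins short_d = {[0:3]}; bins long_d = {[4:$]}; }
    cmd_x_delay : cross cmd_cp, delay_cp; // e.g. ‘valid-to-ready delay’ per command
  endgroup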

Once you have the coverage data, what’s next? One approach is to generate text-based coverage reports that can be browsed manually. Another is to back-annotate the coverage data into a verification plan spreadsheet, ideally provided with the memory VIP, which captures your coverage goals, so that once the simulation has run you can easily relate the results to the goals.

5 – Take charge of protocol and timing checks

These are important for finding specific behavioral and design bugs, and reassuring verification engineers that their memory controllers and interfaces are meeting the requirements of JEDEC or other relevant standards.

Checkers should report as much information about these checks as possible, but give users the option to turn off certain messages so that the log files can be focused on specific areas of interest. It’s also helpful if checkers enable users to get to the root cause of a checking failure, revealing the first step that caused the failure and relating it to the relevant part of the JEDEC standard that has been violated.
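
A simple example of such a check, written as a standalone SystemVerilog assertion, is sketched below for the minimum ACTIVATE-to-READ/WRITE delay (tRCD). The signal names, the cycle count and the message text are hypothetical; a real VIP checker would derive the timing from the loaded part configuration and reference the exact JEDEC clause.

  module ddr_trcd_check #(parameter int TRCD_CYCLES = 10) // placeholder value
    (input logic ck, rst_n, activate_cmd, read_cmd, write_cmd);

    // After an ACTIVATE, no READ or WRITE may be issued for TRCD_CYCLES-1 cycles
    property p_trcd_min;
      @(posedge ck) disable iff (!rst_n)
        activate_cmd |=> !(read_cmd || write_cmd) [* TRCD_CYCLES - 1];
    endproperty

    a_trcd_min : assert property (p_trcd_min)
      else $error("DDR_TRCD: READ/WRITE issued before tRCD elapsed after ACTIVATE");
  endmodule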

6 – Ensure you have backdoor access

Running a design under simulation means that the way you access memory is constrained by the design of the system – and you’ll usually have to read and write memory using standard protocols over the standard memory bus. For verification purposes, it’s useful to be able to bypass this, in much the same way as with bypassing initialization, so that you can quickly put the design into a specific known state.

Look for verification IP that can expedite memory access by supporting initialization with 0s, 1s or a pattern of your choice, and that enables you to read or write to memory locations using peek() and poke() commands, ideally over a specified address range. It’s also useful to have quick access to mode register settings, and to be able to set, get and clear the attributes of any memory location.
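
As a sketch of how this might be used from a test, the tasks below preload an address range with a pattern and spot-check a location, using the peek()/poke() style of access described above. The mem_model handle and the exact argument lists are hypothetical placeholders for a real VIP's backdoor API.

  task automatic preload_region(bit [63:0] start_addr,
                                bit [63:0] end_addr,
                                bit [31:0] pattern);
    for (bit [63:0] addr = start_addr; addr <= end_addr; addr += 4)
      mem_model.poke(addr, pattern);   // write directly, no bus traffic
  endtask

  task automatic check_location(bit [63:0] addr, bit [31:0] expected);
    bit [31:0] actual;
    actual = mem_model.peek(addr);     // read directly, no bus traffic
    if (actual !== expected)
      `uvm_error("BACKDOOR", $sformatf("0x%0h: got 0x%0h, expected 0x%0h",
                                       addr, actual, expected))
  endtask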

7 – Make sure there are hooks for scoreboarding

Scoreboarding helps users keep track of progress towards verification goals. It relies on establishing facilities (or ‘hooks’) within key components that provide easy access to the transaction content underpinning the scoreboarding process, as well as a higher-level presentation, such as at the DRAM or DIMM level, to make it more easily comprehensible.

This means including monitors in the memory agent, and ensuring that the overall environment provides an analysis port to which the monitors can write transaction objects once each transaction has completed. The analysis port should be able to connect to any object that requires the transaction information. It is also useful to establish callbacks: virtual methods that start out empty, offer access to objects, and can represent traffic or dataflows. Users should be able to access the callbacks to add code of their own, or to extend any that is already in place.
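
Using standard UVM machinery, those hooks look something like the sketch below: a subscriber connected to the monitor's analysis port receives each completed transaction. The ddr_txn transaction class and the connection path are hypothetical; only the uvm_subscriber/analysis-port pattern is standard.

  class ddr_scoreboard extends uvm_subscriber #(ddr_txn);
    `uvm_component_utils(ddr_scoreboard)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    // Called through the monitor's analysis port for every completed transaction
    function void write(ddr_txn t);
      // Compare against expected data, update progress towards verification goals, etc.
      `uvm_info("SB", $sformatf("Observed %s", t.convert2string()), UVM_MEDIUM)
    endfunction
  endclass

  // In the environment's connect_phase, wire the monitor to any interested object:
  //   agent.monitor.ap.connect(scoreboard.analysis_export);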

8 – Think about debug support

Debug support seems obvious, but it is important to ensure that any memory VIP you use has the infrastructure necessary to enable it, such as a debug port for extracting transaction information and the ability to report DDR transactions in the simulation log file or a separate trace file.
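
One standard-UVM way to obtain such a separate trace file is to route a given message id from the VIP monitor to its own log file, as in the sketch below. The component path and the "DDR_TRACE" id are hypothetical, but set_report_id_file() and set_report_id_action() are plain UVM report controls.

  // Inside the test class, once the environment has been built:
  function void end_of_elaboration_phase(uvm_phase phase);
    int fd = $fopen("ddr_trace.log", "w");
    env.agent.monitor.set_report_id_file("DDR_TRACE", fd);
    env.agent.monitor.set_report_id_action("DDR_TRACE", UVM_DISPLAY | UVM_LOG);
  endfunction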

9 – Look for protocol-centric debug

It is also helpful to have a sophisticated way of looking at this data, so that its message doesn’t get lost in the ‘noise’ of a flood of data. Synopsys’ DesignWare Protocol Analyzer and/or Verdi PA enable users to get a simplified view of protocol activity, and to look at multiple protocols at once. These tools help users find errors in a protocol-centric view, and provide detailed information, specified through filtering, on demand.

These tools have many other capabilities, such as synchronization between log files and back-annotation of errors into the protocol view, and they support a synchronized view of transactions and signals. The DesignWare Protocol Analyzer links to Verdi and DVE, with automation to load signals from a protocol view into a wave view and to carry out synchronized zoom, scroll and so on.

They can also help users understand protocol traffic, provide a summary of executed transactions and memory states, and offer detailed transaction viewing, along with a simulation log-file timeline display.

10 – Synchronized transaction and waveform viewing is important

Memories are driven by signals, but there is a substantial gap between what happens to an individual signal and its role in implementing a high-level protocol. One way to bridge this distance is to be able to look at activity in a protocol analyzer alongside a waveform view of simulation activity. By connecting and synchronizing the two views, it is much easier to understand how the behavior of individual signals or groups of signals is affecting the implementation of a particular protocol.

More info

Synopsys Verification IP (VIP) provides verification engineers access to the industry’s latest protocols, interfaces and memories required to verify their SoC designs. Deployed across thousands of projects, Synopsys VIP supports AMBA, PCI Express, USB, MIPI, DDR, LPDDR, HDMI, Ethernet, SATA/SAS, Fibre Channel, OCP and other protocols. The Synopsys VIP solution is written in 100% native SystemVerilog to enable ease of use, ease of integration and high performance. It supports advanced SystemVerilog-based testbenches with built-in methodology support for UVM, and includes built-in verification plans, coverage and checking to accelerate coverage closure.

Find out more at www.synopsys.com/vip

Check out the related webinar: Keeping Pace with Memory Technology using Advanced Verification

Author

Nasib Naser is senior staff corporate applications engineer in the verification group for Synopsys. He has extensive experience in SoC design and verification, embedded systems design, and computer architecture. Naser spent more than 15 years in EDA, where he led many customers’ design and verification projects using SystemC and later SystemVerilog. He currently manages memory VIP engagements in North America, and has more than 30 years of experience as a technical applications engineer. He has previously worked at NASA/Ames, Varian, and CoWare.
