Bridging the analog-digital divide for verification

By Alberto Allara and Fabio Brognara  |  Posted: August 23, 2011
Topics/Categories: EDA - Verification

Bridging the analog-digital divide is tough, particularly when it comes to verification. The two domains are marked by a host of differences with regard to tools, methodologies and the basic means of developing and testing designs. Analog engineers do most of their work through graphical interfaces, while their digital counterparts do most of theirs by writing and compiling code. There are also the typical challenges that confront complex chip or system design projects, such as increasing demands for speed, granular power management and delivery within ever-tightening schedules. Engineers from STMicroelectronics describe their approach to bridging the divide, using the Open Verification Methodology on Mentor Graphics' Questa verification platform to verify complex IP destined for integration into a hard disk drive controller.

It is generally assumed that anything with an on/off switch will eventually communicate with the world exclusively in strings of 0s and 1s. However, as long as the electronics industry is built on harnessing the laws of physics, the importance of analog signals will not go away. Nature speaks in waveforms, not regimented, binary bitstreams. Therefore, a serious challenge exists in verifying what happens at the analog-digital interface, and it is one we must address with ever greater finesse as we develop progressively more complex devices.

This paper describes the specific, granular example of the read/write channel for a hard disk drive (HDD – Figure 1). The channel’s capabilities and functions are managed by a sophisticated chip made up of multiple IP blocks. The chip has a modest analog front end to handle the magnetic waveforms read from and written to the disk. The digital portion is significantly larger and more complex, and this complexity has progressively increased in recent years. Both the analog and digital portions of the chip interact with three control loops that adjust the gain, offset and asymmetry of the input waveforms. So, how does one go about verifying IP associated with the read/write channel?

Figure 1
A typical hard disk drive. An ST chip incorporates the R/W conversion of analog waveforms into digital bitstreams and vice versa. Source: STMicroelectronics

Hard disk drives, a primer

Before answering that question, it is first worth walking through some HDD basics, as the hardware offers an archetypal example of how to manage the analog-digital hand-off (Figure 2).

Figure 2
Analog-digital transformation flow in magnetic recording. Source: STMicroelectronics

The starting point is a stream of binary data to be written to a drive. The 1s and 0s in the bitstream must be encoded and then output as an analog waveform, a conversion handled by a microcontroller. The waveform is imprinted on the magnetic regions of the drive’s spinning disk (‘platter’ – Figure 3), thus storing the binary stream.

Figure 3
A typical HDD platter. Source: STMicroelectronics

To retrieve data from the drive, the same process runs more or less in reverse, though it is more complex. The read/write head, carried by the drive’s actuator, moves over the area of the disk where the data are stored. The pattern of magnetization on the disk changes the current in the head, a change that can be represented as a waveform. This waveform is sampled by the chip, which outputs a new stream of binary data after first applying complex data-recovery algorithms to remove any inter-symbol interference.

Perhaps the biggest challenge in verifying IP for a read/write channel being integrated into a larger SoC is working in a meaningful way across the analog and digital domains. That is because engineers have historically specialized, developing skill sets that are relevant to just one domain.

Digital verification engineers eschew graphics and spend most of their time writing and compiling vast amounts of code. By contrast, analog verification engineers look warily at code and carry out most of their work using graphical interfaces. There is the concept of ‘mixed-mode simulation’ but the phrase generally refers to implementing an analog design digitally, not truly working across domains.

Another challenge is how to approach the overall verification process in such a way that at least some of the steps and tasks can be subsequently reused. Many verification engineers seek to avoid starting from scratch for each new project. Although a custom approach may give a team a chance to demonstrate its technical prowess, a one-off verification plan is undeniably a huge sunk cost, particularly as verification complexity skyrockets. (Bear in mind here that verification complexity increases at some multiple of the rate of increase of the number of gates, a troubling formula given that gate counts are already in the region of hundreds of millions and rising.)

Components of a strategy

Upfront planning is the best hedge against nearly all thorny verification problems, including those associated with analog-digital interfaces. Rather than focusing exclusively on the read/write IP, consider instead how the IP will eventually be integrated into the SoC. The highly nuanced relationships between the various building blocks that make up today’s system designs mean that there is more to verification than just looking at the various signals expected to pass through the IP.

Another worthwhile strategy is to make the greatest possible use of standards. Over the past few decades, the increasing role of standards has been the biggest influence in changing how verification work gets done. Not that long ago, engineers would wrestle with HDLs and other description languages to build and verify RTL designs. They would cobble together their own techniques (in fact, many still do), using everything from C code to SPICE simulations. All this took time and effort. Today, we reduce much of that by leaning heavily on the Universal Verification Methodology (UVM), as ratified by Accellera in February 2011.

Parallelism is a hallmark of today’s computer chips and SoCs, so it should be no surprise that it is the basis of most verification flows, too. It is most efficient for verification engineers to start writing verification components as soon as their colleagues in design begin writing the digital design in RTL. Accordingly, the digital design is verified in ever greater detail as more and more of the RTL takes shape. 

Meanwhile, the team assigned to verifying the IP’s analog front end should build a model of the analog domain. VHDL-AMS works well for this. The model is provided to the digital verification engineers, who use it to close the loop with the RTL describing the digital channel front end. At this point, if standards have been adhered to, it is possible to begin true mixed-mode simulations, and to do so while reusing much of the verification infrastructure.
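The article builds this model in VHDL-AMS. Purely as an illustration of what such a behavioral stand-in for the analog front end might look like, here is a minimal real-number sketch written in SystemVerilog instead; the module name, ports and the simple gain/offset transfer function are assumptions made for the example, not a description of the actual ST model.

    // Minimal behavioral stand-in for an analog front end, using real-valued
    // ports. The actual ST model is written in VHDL-AMS; this SystemVerilog
    // sketch only illustrates the idea of a simple model the digital team can
    // instantiate alongside the RTL. Names and the transfer function are
    // assumptions.
    module afe_model (
      input  real vin,      // continuous-valued input from the DAC layer
      input  real gain,     // setting driven by the gain control loop
      input  real offset,   // correction driven by the offset control loop
      input  wire clk,      // sampling clock of the digital channel front end
      output real sample    // sampled value handed to the digital RTL
    );
      real vout;
      always_comb vout = gain * vin + offset;    // idealized front-end behavior
      always_ff @(posedge clk) sample <= vout;   // sample for the digital side
    endmodule

A real-valued behavioral model of this kind simulates in the ordinary event-driven kernel, which is what makes it cheap enough for the digital team to keep in the loop while the RTL grows.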

The OVM (Open Verification Methodology), on which UVM is based, requires specifying verification components for each interface of the device. These components are coordinated via a multichannel sequencer operating at a higher, more abstract layer of the OVM environment.
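As a concrete illustration of that structure, the sketch below shows one transaction type per interface and a multichannel (virtual) sequencer that simply holds handles to the per-interface sequencers, so that a higher-layer sequence can coordinate traffic on both. It is a minimal, hypothetical OVM sketch; the class names and fields are invented for the example and are not the actual ST code.

    // Minimal OVM sketch: one transaction type per DUT interface, plus a
    // multichannel (virtual) sequencer that exposes handles to the
    // per-interface sequencers. All names and fields are hypothetical.
    `include "ovm_macros.svh"
    import ovm_pkg::*;

    // Transaction for the register-programming interface
    class reg_item extends ovm_sequence_item;
      rand bit [7:0]  addr;
      rand bit [15:0] data;
      `ovm_object_utils_begin(reg_item)
        `ovm_field_int(addr, OVM_ALL_ON)
        `ovm_field_int(data, OVM_ALL_ON)
      `ovm_object_utils_end
      function new(string name = "reg_item"); super.new(name); endfunction
    endclass

    // Transaction for the head-and-media (bitstream) interface
    class bit_item extends ovm_sequence_item;
      rand bit value;
      `ovm_object_utils_begin(bit_item)
        `ovm_field_int(value, OVM_ALL_ON)
      `ovm_object_utils_end
      function new(string name = "bit_item"); super.new(name); endfunction
    endclass

    // Multichannel (virtual) sequencer: drives no interface itself, but holds
    // handles to the interface-level sequencers for higher-layer sequences
    class rw_vsequencer extends ovm_sequencer;
      `ovm_component_utils(rw_vsequencer)
      ovm_sequencer #(reg_item) reg_seqr;   // register-programming channel
      ovm_sequencer #(bit_item) hmt_seqr;   // head-and-media (HMT) channel
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
    endclass

A top-level sequence running on rw_vsequencer could then, for example, program the IP’s registers through reg_seqr before starting bitstream traffic through hmt_seqr.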

Here, it is possible to write components that program the registers of the read/write IP. For the ‘read’ portion of the IP, create a component that generates a stream of bits modeled on what might be read from an HDD’s spinning disk. Another verification component extracts the information after the stream has been processed and compares the output to the expected result. Similar components can be developed for the ‘write’ portion of the IP.
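The ‘extract and compare’ step can be pictured as an OVM scoreboard with two analysis exports: one receives the bits originally fed into the channel, the other receives the decoded bits, and the two streams are checked in order. Again, this is only a hedged sketch under assumed names (bit_item, rw_scoreboard); the article does not describe ST’s actual component.

    // Hypothetical OVM scoreboard sketch for the compare component: bits fed
    // into the channel arrive on one analysis export, decoded bits on the
    // other, and they are checked in order. Names are illustrative only.
    `include "ovm_macros.svh"
    import ovm_pkg::*;

    `ovm_analysis_imp_decl(_expected)
    `ovm_analysis_imp_decl(_actual)

    // Same simple one-bit transaction as in the earlier sketch
    class bit_item extends ovm_sequence_item;
      rand bit value;
      `ovm_object_utils_begin(bit_item)
        `ovm_field_int(value, OVM_ALL_ON)
      `ovm_object_utils_end
      function new(string name = "bit_item"); super.new(name); endfunction
    endclass

    class rw_scoreboard extends ovm_scoreboard;
      `ovm_component_utils(rw_scoreboard)

      ovm_analysis_imp_expected #(bit_item, rw_scoreboard) expected_export;
      ovm_analysis_imp_actual   #(bit_item, rw_scoreboard) actual_export;

      bit expected_q[$];   // bits sent into the channel, in order
      int errors;

      function new(string name, ovm_component parent);
        super.new(name, parent);
        expected_export = new("expected_export", this);
        actual_export   = new("actual_export", this);
      endfunction

      // Called for every bit the stimulus side sends into the channel
      function void write_expected(bit_item t);
        expected_q.push_back(t.value);
      endfunction

      // Called for every decoded bit observed at the channel output
      function void write_actual(bit_item t);
        if (expected_q.size() == 0) begin
          ovm_report_error("SCB", "Decoded bit arrived with no expected data pending");
          return;
        end
        if (expected_q.pop_front() !== t.value) begin
          errors++;
          ovm_report_error("SCB", $sformatf("Bit mismatch at output (total errors: %0d)", errors));
        end
      endfunction
    endclass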

Next comes simulation of the bitstream, which can be accomplished by writing a pattern generator in C code and then embedding it in the appropriate verification component. The component is represented by the Head and Media Transactor box, labeled ‘HMT’ in Figure 4. To reuse the environment for mixed-mode simulation, create a digital-to-analog converter layer (VHDL-AMS is a good choice for this, too) that converts the bitstream into a continuous signal that roughly approximates the expected input to the IP’s analog front end.
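One way to picture the HMT is as an OVM component that calls into the C pattern generator through the SystemVerilog DPI. The sketch below is illustrative only: the C entry point hmt_next_bit() and the component’s name and ports are assumptions, since the article does not describe the actual interface to the generator.

    // Hypothetical sketch of embedding a C pattern generator in the Head and
    // Media Transactor (HMT). hmt_next_bit() is an assumed C entry point; a
    // matching C file would need to provide: int hmt_next_bit(void);
    `include "ovm_macros.svh"
    import ovm_pkg::*;

    import "DPI-C" function int hmt_next_bit();

    // Same simple one-bit transaction as in the earlier sketches
    class bit_item extends ovm_sequence_item;
      rand bit value;
      `ovm_object_utils_begin(bit_item)
        `ovm_field_int(value, OVM_ALL_ON)
      `ovm_object_utils_end
      function new(string name = "bit_item"); super.new(name); endfunction
    endclass

    class hmt_transactor extends ovm_component;
      `ovm_component_utils(hmt_transactor)

      // Publishes each generated bit, e.g. to the scoreboard as expected data
      ovm_analysis_port #(bit_item) ap;

      function new(string name, ovm_component parent);
        super.new(name, parent);
        ap = new("ap", this);
      endfunction

      task run();
        bit_item tr;
        forever begin
          tr = new("tr");
          tr.value = bit'(hmt_next_bit());  // next bit of the disk-like pattern from C
          ap.write(tr);
          // The bit would also be driven into the channel front end here,
          // either directly (digital-only run) or via the DAC layer for
          // mixed-mode runs.
          #10;  // placeholder pacing; the real bit rate is set by the channel
        end
      endtask
    endclass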

Figure 4
Block diagram of overall verification environment. Source: STMicroelectronics

Defining successful verification

Engineers are always asked some variation on this question: “How do you know your approach is successful?” To answer, we will offer an anecdote from our own experience, work that was greatly facilitated by Mentor Graphics’ Questa ADMS, which fit our flow particularly well.

Using the methodology we built, we found bugs in the digital domain. This was very much expected: finding them was the main purpose of the digital verification environment. However, we also found bugs in the analog domain that our analog design and verification colleagues had missed.

Our analog success appears to stem from our effort to feed the analog front end a pattern very similar to one that might actually be read from a disk. By contrast, in analog-only verification the simulated patterns are often simple and fairly symmetrical, characteristics not shared by the complex, sprawling and often asymmetrical waveforms produced by magnetic fields.

Ultimately, the goal of verification at the analog-digital interface is to find problems with a design before it is built, rather than leave those problems to be uncovered by customers. Any methodology that seems to ratify a project as ‘mostly okay’ is more likely pointing to a flawed verification strategy than proving a good design. That is why our preferred indicator of success here, the discovery of bugs that others had overlooked, is perhaps the best.

Alberto Allara and Fabio Brognara are verification engineers at STMicroelectronics in Milan, Italy.

STMicroelectronics
39, Chemin du Champ des Filles
Plan-Les-Ouates
Geneva
Switzerland
CH 1228

W: www.st.com
T: +41 22 929 29 29
