Transaction level modeling


What is transaction-level modeling?

Transaction-level modeling (TLM) is a technique for describing a system by using function calls that define a set of transactions over a set of channels.

TLM descriptions can be more abstract than the register-transfer level (RTL) descriptions traditionally used as a starting point for IC implementation, and can therefore simulate more quickly. However, TLM can also be used to define designs in a less abstract, more detailed way. More recently, it has increasingly been used to encapsulate existing detailed functional-block descriptions, creating consistent frameworks (a.k.a. ‘virtual platforms’) for integrating and simulating components in system designs that are evolving at many levels of abstraction.

TLM as a concept is not tied to one language. Early implementations, such as those used to develop mainframe computers, relied on proprietary languages. As a standard API for system modeling, TLM is implemented as a layer on top of SystemC, a modeling language and simulation environment built as a C++ class library. One problem encountered by early adopters of SystemC and TLM was a lack of consistency among models, which made it difficult for companies to exchange them. TLM 2.0, released in June 2008, defined interfaces that have improved compatibility between models developed by different teams.

What does it do and why?

TLM 2.0 gives designers a consistent way to model transactions in systems based on a memory-mapped bus architecture. The resultant virtual platforms should be functionally complete and accurate at the register level, but lack the clocks, signal pins, and implementation details that less abstract modeling techniques use, and which slow down simulation. The timing of the TLM 2.0 model will be loose or approximate.

The TLM 2.0 definition is based on three layers. The lowest involves ‘mechanisms’: C++ application programming interfaces (APIs) that define functions such as blocking and non-blocking transport interfaces, direct memory interfaces, sockets, generic payloads, phases and more, and which ensure interoperability.

In the second layer, TLM 2.0 offers two coding-style guidelines that define how loosely-timed and approximately-timed TLM 2.0 blocks are written.

At the third and highest layer of TLM 2.0 are four use-cases: software development, software performance estimation, architectural analysis and performance modeling, and hardware verification.

The coding styles have different purposes.

Loosely-timed models have just enough timing information to run operating systems and handle multicore systems. Loosely-timed models are allowed to run ahead of the master simulation clock, which speeds simulation. They are also allowed to bypass the transaction-based block-to-block interface entirely and have direct access to areas of memory within a target function, again to accelerate simulation.

Approximately-timed models add just enough timing information to make the model useful for architectural exploration and performance analysis. These models run in lockstep with the master simulation clock.

The interoperability that is at the heart of TLM 2.0’s value is achieved by defining transactions using a core interface between a socket on the initiator of a transaction and a socket on its target. The data passing over the resultant link is carried in the generic payload format, which defines standard slots for the kinds of information (e.g., data, addresses) that gets passed around in memory-mapped systems. The handshaking involved to send and acknowledge receipt of the payload is carried using a ‘base protocol’, which defines a set of phases marking the beginning and end of a request and a response.

Where can I use it?

Virtual platform models created using TLM 2.0 can be used in a number of ways.

– For architectural exploration and performance analysis, especially of large systems, where the efficiency of the high-level TLM 2.0 model enables rapid simulation

– As a platform for early application software development, since the TLM 2.0 definition of the hardware functionality can have enough detail for software to run on it, and be available months before a detailed RTL implementation

– As a platform for early test-bench development, which can then be carried forward as the design implementation gets more detailed by writing adapters that convert high-level transactions into signal-level transitions to drive functional blocks that have been refined into RTL descriptions

– As a ‘golden reference model’ for hardware verification, since TLM 2.0 wrappers can be used to create a consistent interface to functional blocks whose detailed implementation is evolving

– As a common language for integrating multiple verification strategies, because common verification interfaces have been defined in SystemC and SystemVerilog

– To enable reuse of functional blocks and verification suites

There’s more on the value that the consistency of TLM modeling brings to the verification task in a 2006 paper by Mentor Graphics.

Can I buy it?

The TLM 2.0 standard definition and related downloads are available for free from Accellera.

Who is involved?

The development of TLM 2.0 has been an industry-wide effort – more than 2,100 SystemC users and OSCI members are said to have participated in the public review of the draft standard.

TLM 2.0, which builds upon SystemC, is an Accellera Systems Initiative standard.

SystemC was defined by the Open SystemC Initiative (OSCI), now part of Accellera, and ratified as a standard by the IEEE.

Risk factors

TLM 2.0 enables systems to be defined at a high level of abstraction, and allows that model to be used as a golden reference as functional blocks are defined in increasing detail. What is missing, though, is a direct route from the TLM 2.0 model to implementation. Unlike RTL definitions, which are turned into gate-level descriptions using synthesis, the transition from TLM 2.0 to RTL is a mainly manual process, where the correctness of the translation relies on constant checking against the model rather than on automated translation. However, tools have been developed that support assisted high-level synthesis and equivalence checking between SystemC models and RTL implementations.
