[ESL Modelling] TLM (2)

SystemC 1.0 offered basic capabilities for threading and subroutine calls through channels, aimed mainly at net-level modeling. SystemC 2.0 greatly simplified these tasks, and TLM 2.0 (introduced in July 2008) addressed the interface inheritance issues of TLM 1.0 by introducing the TLM convenience sockets outlined in Table 5.1. TLM 2.0 also defined the generic payload, promoting interoperability between IP blocks from different vendors, and established rules for memory ownership and garbage collection, timing, and fast backdoor access to RAM models.

Key Features of TLM 2.0

TLM 2.0 emphasizes general bus operations rather than method names tailored to specific applications. Dispatching within the target IP blocks is based on address fields, similar to real hardware, where different addressable registers or methods handle various functions such as reset and write.
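For example, a target's blocking-transport callback might decode the incoming address into per-register behaviour. The following is only a sketch; the register offsets and helper functions (CTRL_REG, DATA_REG, do_reset, handle_data) are hypothetical:

// Sketch of address-based dispatch inside a target's b_transport callback.
void mydev::b_transact(tlm::tlm_generic_payload& trans, sc_time& delay) {
    const uint64_t CTRL_REG = 0x0;  // Hypothetical reset/control register offset.
    const uint64_t DATA_REG = 0x4;  // Hypothetical data register offset.
    switch (trans.get_address()) {
        case CTRL_REG: do_reset();         break; // A write here resets the device.
        case DATA_REG: handle_data(trans); break; // Reads/writes of the data register.
        default:
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
    }
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
}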

Generic Payload

The generic payload in TLM 2.0 is a standardized data structure designed to facilitate and standardize data transfer between hardware IP blocks, ensuring compatibility across different vendors. Key concepts include:

  • Data Packet (Payload): A generic payload acts as a packet containing data, represented as a record (structure or object) in memory.
  • Includes Various Information: The payload contains details such as the data address, the data itself, the command type (e.g., read, write), the data length, byte enable information, and the streaming width. These fields define what data is being transferred and how it should be processed.
  • Socket-Based Data Transfer: The generic payload is transmitted between IP blocks (initiators and targets) via sockets in a SystemC TLM model. For example, an initiator IP block creates a generic payload to read or write data to a target IP block through a socket.
  • Extensibility: Beyond its standard fields, the generic payload can be extended with custom fields to suit specific bus protocols or hardware characteristics, allowing for more complex data transfers.
  • Type Safety: Leveraging C++'s strong type system, the generic payload can manage various data structures while maintaining type safety. This allows diverse data types, such as USB request blocks (URBs), network frames, and cache lines, to be handled through the generic payload.
  • Reflects Real Hardware Behavior: The structure and usage of the generic payload mimic actual hardware bus operations. For example, accessing specific hardware registers via an address field is analogous to accessing specific memory addresses through the generic payload.

Overall, the generic payload in TLM 2.0 is a flexible and extensible data structure designed to standardize data transfers between various hardware IP blocks, enhancing communication and interoperability in complex SoC designs.
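As a sketch of the extensibility point above, a custom field can be attached to a payload trans by deriving from tlm::tlm_extension. The my_route extension and its hop_count field are invented purely for illustration:

// Hypothetical extension carrying extra routing information with a payload.
struct my_route : tlm::tlm_extension<my_route> {
    int hop_count = 0; // Invented field for illustration only.
    tlm::tlm_extension_base* clone() const override { return new my_route(*this); }
    void copy_from(const tlm::tlm_extension_base& other) override {
        hop_count = static_cast<const my_route&>(other).hop_count;
    }
};

my_route* ext = new my_route;     // Note: without a memory manager, the initiator
trans.set_extension(ext);         // remains responsible for freeing the extension.
my_route* found = trans.get_extension<my_route>();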

TLM 2.0 Data Transfer Structure

The TLM 2.0 data transfer structure includes generic payloads and sockets:

1. Generic Payload (C Structure)

The left part of the diagram shows the primary fields of the generic payload, represented as a C structure:

  • Command: Specifies the action to perform, such as read, write, load-linked, or store-conditional.
  • Address: Indicates the memory address for reading or writing data.
  • Data Pointer: A pointer to the memory where the data to be transferred is stored.
  • Data Length: Specifies the length of the data to be transferred.
  • Byte Lane Info Pointer: Points to byte lane information, indicating which bytes of the data are active during the transfer.
  • Response: Stores the response status, indicating whether the command was successfully executed.
  • Misc + Extensions: Allows for additional, use-case-specific information, making the payload adaptable.
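A minimal sketch of how an initiator might populate these fields before a transfer, using the standard tlm::tlm_generic_payload accessors (the address and buffer values are arbitrary):

tlm::tlm_generic_payload trans;
uint8_t buffer[4];                                       // Initiator-side data buffer.
trans.set_command(tlm::TLM_READ_COMMAND);                // Command.
trans.set_address(0x100);                                // Address (arbitrary example).
trans.set_data_ptr(buffer);                              // Data pointer.
trans.set_data_length(4);                                // Data length in bytes.
trans.set_byte_enable_ptr(nullptr);                      // No byte lane restrictions.
trans.set_streaming_width(4);                            // Equal to length: no streaming.
trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE); // Response, filled by the target.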
2. System Components

The right part of the diagram illustrates the data flow within a TLM 2.0 system:

  • Initiator (e.g., CPU): The component that initiates data transfers or requests, such as a CPU.
  • Intermediate Component (e.g., Bus/Cache): Components such as buses or caches that act as intermediaries in the data path. In the example, “busmux0” serves this purpose.
  • Targets (e.g., Memory, I/O Device): The destinations of the data transfers initiated by the initiator, such as memory or I/O devices.
3. Data Transfer Flow

  • Payload Transfer: The initiator creates a generic payload and passes it by pointer through intermediate components. The payload itself is not copied; passing the reference improves efficiency.
  • Data Access: The actual data is accessed via the memory address pointed to by the payload, rather than being embedded within it.
  • Command Execution: The initiator issues commands (e.g., read/write) that target specific memory addresses or devices based on the address field.
  • Delay Management: A delay variable, managed by the initiator thread, can be adjusted by other components or synchronized with the EDS kernel, as sketched below.
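Putting the flow together, an initiator thread might issue a payload by reference and then synchronize its accumulated delay with the kernel. This is only a sketch; the module and socket names (myinit, init_socket) are illustrative:

// Sketch of one loosely-timed transfer from an initiator thread.
void myinit::run() {
    tlm::tlm_generic_payload trans;
    sc_time delay = SC_ZERO_TIME;           // Delay variable owned by this thread.
    // ... populate trans as shown earlier ...
    init_socket->b_transport(trans, delay); // Payload passed by reference, not copied.
    // Components along the path may have added their latencies to 'delay'.
    wait(delay);                            // Synchronize with the EDS kernel.
}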

This structure demonstrates how TLM 2.0 uses generic payloads and sockets to facilitate communication between initiators, intermediates, and targets within a system. Each component can modify or reference the payload to perform necessary tasks, ensuring efficient data transfer and processing.

Example: Connecting an SRAM as a Bus Target

Consider connecting a small SRAM as a bus target, defined as an SC_MODULE. The first step involves defining the target socket in the header file:

#include "tlm_utils/simple_target_socket.h"

SC_MODULE(cbgram) {
    tlm_utils::simple_target_socket<cbgram> port0; // Target socket for incoming transactions.
    ...
};

The implementation file then provides the constructor:

cbgram::cbgram(sc_module_name name, uint32_t mem_size, bool tracing_on, bool dmi_on) 
    : sc_module(name), port0("port0"),
      latency(10, SC_NS), mem_size(mem_size), tracing_on(tracing_on), dmi_on(dmi_on) {
    mem = new uint8_t[mem_size]; // Allocate memory for storing data.
    // Register the callback to handle b_transport interface method calls.
    port0.register_b_transport(this, &cbgram::b_transact);
}
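Since the constructor also receives a dmi_on flag, a direct memory interface (DMI) callback could be registered in the same place to provide the fast backdoor access mentioned earlier. The following is a sketch, not part of the original listing; get_dmi is a hypothetical callback name:

// In the constructor: optionally register a DMI callback.
if (dmi_on)
    port0.register_get_direct_mem_ptr(this, &cbgram::get_dmi);

// The callback grants the initiator a raw pointer into the RAM array.
bool cbgram::get_dmi(tlm::tlm_generic_payload& trans, tlm::tlm_dmi& dmi) {
    dmi.allow_read_write();            // Backdoor reads and writes permitted.
    dmi.set_dmi_ptr(mem);              // Direct pointer to the storage.
    dmi.set_start_address(0);
    dmi.set_end_address(mem_size - 1);
    return true;                       // DMI granted.
}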

The constructor registers the socket and its callbacks; above, b_transact is registered as the blocking (b_transport) entry point:

void cbgram::b_transact(tlm::tlm_generic_payload& trans, sc_time& delay) {
    tlm::tlm_command cmd = trans.get_command();
    uint32_t adr = static_cast<uint32_t>(trans.get_address());
    uint8_t* ptr = trans.get_data_ptr();
    uint32_t len = trans.get_data_length();
    uint8_t* lanes = trans.get_byte_enable_ptr();
    uint32_t wid = trans.get_streaming_width();

    if (cmd == tlm::TLM_READ_COMMAND) {
        ptr[0] = mem[adr];   // Read: copy the addressed byte into the initiator's buffer.
    } else {
        mem[adr] = ptr[0];   // Write: copy the initiator's byte into the RAM array.
    }

    delay += latency;        // Account for the access latency set in the constructor.
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
}

This is the minimal working C++ code. To connect the initiator to the target, use the bind method during instantiation, establishing the connection topology between socket instances:

busmux0.init_socket.bind(memory0.port0);
busmux0.init_socket.bind(memory1.port0);
busmux0.init_socket.bind(iodev0.port0);

TLM 2.0 defines a set of convenience sockets. The problem of multiple ports of the same type is solved by multi-sockets, which can be bound multiple times, as shown in the code above. Passthrough sockets can forward generic payload references to subsequent TLM calls, directly reflecting the behavior of interconnect components passing data flits. An initiator can select which binding of a multi-socket to send a message to by providing an integer index to the overloaded subscript operator:

int n = ...; // Binding index to send the message.
output_socket[n]->b_transport(trans, delay);

Convenience sockets can register and call both blocking and non-blocking styles of transport, and they can convert a call to the other form if the required callback is not registered on the target. Additionally, reverse channels allow targets to call methods on initiators, which is useful for operations such as cache snoops and line invalidations: an L1 cache acts as an initiator toward an L2 cache, and the L2 uses the backward path to act on the L1.
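As a sketch of such a reverse channel, an initiator-side module can register a backward-path callback on its socket. The module, socket, and handler names below (l1cache, init_socket, nb_bw) are illustrative:

// An initiator (e.g., an L1 cache) registers the backward path so that a
// downstream target (e.g., an L2 cache) can call back into it.
SC_MODULE(l1cache) {
    tlm_utils::simple_initiator_socket<l1cache> init_socket;

    SC_CTOR(l1cache) : init_socket("init_socket") {
        // Backward-path callback, e.g., for snoop/invalidate traffic.
        init_socket.register_nb_transport_bw(this, &l1cache::nb_bw);
    }

    tlm::tlm_sync_enum nb_bw(tlm::tlm_generic_payload& trans,
                             tlm::tlm_phase& phase, sc_time& delay) {
        // Hypothetical handler: e.g., invalidate the line at trans.get_address().
        return tlm::TLM_COMPLETED;
    }
};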
