I. A basic workflow for analyzing a TLM-2.0 example
1. Start from sc_main(); identify each module class's important internal sub-modules and the public ports or sockets it exposes for external connection, and sort out the basic has-a and is-a relationships.
2. Read the constructors of the top-level class's member classes to sort out how the modules' ports are connected. (A socket is essentially a bundled pair of forward and backward interfaces.)
3. Go from front to back, initiator to target, reading each module's methods and threads; then go from back to front, target to initiator, and think through the nb_ and b_ transport relationships.
II. Example analysis
The example analyzed is: D:\systemc-2.3.3\examples\tlm\at_1_phase\
The comments marked //LL:: are my own annotations; the remaining comments were written by the example's authors. Sincere thanks to the authors for generously open-sourcing this project!
(1.) Read sc_main(), identify the class membership, and sort out the basic has-a and is-a relationships
The entry function int sc_main(int argc, char* argv[]) lives in the .cpp file named after the example: at_1_phase.cpp
/// Source of file at_1_phase.cpp
//=====================================================================
/// @file example_main.cpp
///
/// @brief Example main instantiates the example system top and starts
/// the systemc simulator
///
//=====================================================================
// Original Authors:
// Bill Bunton, ESLX
// Anna Keist, ESLX
// Charles Wilson, ESLX
// Jack Donovan, ESLX
//=====================================================================
// define REPORT_DEFINE_GLOBALS in one location only
#define REPORT_DEFINE_GLOBALS
#include "reporting.h" // reporting utilities
#include "at_1_phase_top.h" // top module
#include "tlm.h" // TLM header
//=====================================================================
/// @fn sc_main
//
/// @brief SystemC entry point
//
/// @details
/// This is the SystemC entry point for the example system. The argc and argv
/// parameters are not used. Simulation runtime is not specified when
/// sc_start() is called; the example's traffic generator will run to
/// completion, ending the simulation.
///
//=====================================================================
int // return status
sc_main // SystemC entry point
(int argc // argument count
,char *argv[] // argument vector
)
{
REPORT_ENABLE_ALL_REPORTING ();
//LL:: Step 1 in practice: sort out the class relationships through example_system_top, defined in examples\tlm\at_1_phase\include\at_1_phase_top.h
example_system_top top("top"); // instantiate an example top module
sc_core::sc_start(); // start the simulation
return 0; // return okay status
}
Modules composing the top-level system
The member variables of example_system_top show that, at the macro level, the system is composed of five sub-modules, as shown in Figure 1:
SimpleBusAT<2, 2> m_bus; ///< simple bus
at_target_1_phase m_at_target_1_phase_1; ///< instance 1 target
at_target_1_phase m_at_target_1_phase_2; ///< instance 2 target
initiator_top m_initiator_1; ///< instance 1 initiator
initiator_top m_initiator_2; ///< instance 2 initiator
//=====================================================================
/// @file example_system_top.h
/// @brief This class instantiates components that compose the TLM2
/// example system called at_1_phase
//=====================================================================
// Original Authors:
// Anna Keist, ESLX
// Bill Bunton, ESLX
// Jack Donovan, ESLX
//=====================================================================
#ifndef __EXAMPLE_SYSTEM_TOP_H__
#define __EXAMPLE_SYSTEM_TOP_H__
#include "reporting.h" // common reporting code
#include "at_target_1_phase.h" // at memory target
#include "initiator_top.h" // processor abstraction initiator
#include "models/SimpleBusAT.h" // Bus/Router Implementation
/// Top wrapper Module
class example_system_top
: public sc_core::sc_module // SC base class
{
public:
/// Constructor
example_system_top
( sc_core::sc_module_name name);
//Member Variables ===========================================================
//LL:: The simulated system contains (has) one bus, two initiator_top modules, and two target modules, as shown in the figure below.
private:
SimpleBusAT<2, 2> m_bus; ///< simple bus
at_target_1_phase m_at_target_1_phase_1; ///< instance 1 target
at_target_1_phase m_at_target_1_phase_2; ///< instance 2 target
initiator_top m_initiator_1; ///< instance 1 initiator
initiator_top m_initiator_2; ///< instance 2 initiator
};
#endif /* __EXAMPLE_SYSTEM_TOP_H__ */
Figure 1:
Sub-module composition of the initiator_top class
For the sub-module composition, see initiator_top.h.
The initiator_top class is composed of three main members:
tlm::tlm_initiator_socket<> initiator_socket;
select_initiator m_initiator;
traffic_generator m_traffic_gen;
//=====================================================================
/// @file initiator_top.h
//
/// @brief Initiator top module contains a traffic generator and an
/// example tlm initiator module
//
//=====================================================================
// Original Authors:
// Bill Bunton, ESLX
// Charles Wilson, ESLX
// Jack Donovan, ESLX
//=====================================================================
#ifndef __INITIATOR_TOP_H__
#define __INITIATOR_TOP_H__
#include "tlm.h" // TLM headers
#include "select_initiator.h" // AT initiator
#include "traffic_generator.h" // traffic generator
class initiator_top
: public sc_core::sc_module
, virtual public tlm::tlm_bw_transport_if<> // backward non-blocking interface
{
//Member Methods =====================================================
public:
//=====================================================================
/// @fn initiator_top::initiator_top
//
/// @brief initiator_top constructor
//
/// @details
/// Initiator top module contains a traffic generator and an example
/// unique initiator module
//
//=====================================================================
initiator_top
( sc_core::sc_module_name name ///< module name
, const unsigned int ID ///< initiator ID
, sc_dt::uint64 base_address_1 ///< first base address
, sc_dt::uint64 base_address_2 ///< second base address
, unsigned int active_txn_count ///< Max number of active transactions
);
private:
/// Not Implemented for this example but required by the initiator socket
void
invalidate_direct_mem_ptr
( sc_dt::uint64 start_range
, sc_dt::uint64 end_range
);
/// Not Implemented for this example but required by the initiator socket
tlm::tlm_sync_enum
nb_transport_bw
( tlm::tlm_generic_payload &payload
, tlm::tlm_phase &phase
, sc_core::sc_time &delta
);
//Member Variables/Objects ===========================================
public:
tlm::tlm_initiator_socket<> initiator_socket; ///< processor socket
private:
typedef tlm::tlm_generic_payload *gp_ptr; ///< Generic Payload pointer
//LL:: m_request_fifo and m_response_fifo bridge the interfaces of the two internal modules. Broadly, the former carries requests out and the latter brings responses back in.
sc_core::sc_fifo <gp_ptr> m_request_fifo; ///< request SC FIFO
sc_core::sc_fifo <gp_ptr> m_response_fifo; ///< response SC FIFO
const unsigned int m_ID; ///< initiator ID
select_initiator m_initiator; ///< TLM initiator instance
traffic_generator m_traffic_gen; ///< traffic generator instance
};
#endif /* __INITIATOR_TOP_H__ */
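The two FIFO members above are the glue between the traffic generator and the initiator. The same producer/consumer wiring can be sketched in plain C++, with std::queue standing in for sc_core::sc_fifo so the sketch compiles without SystemC; all names here (Payload, generator_send, initiator_step, generator_receive) are illustrative, not part of the example:

```cpp
#include <cassert>
#include <queue>

// Stand-in for tlm::tlm_generic_payload: only an address field matters here.
struct Payload { unsigned long long address; };

// request_fifo carries payloads from the generator to the initiator;
// response_fifo carries the same objects back once the transaction completes.
std::queue<Payload*> request_fifo;
std::queue<Payload*> response_fifo;

// Traffic-generator side: push a request into the shared FIFO.
void generator_send(Payload* p) { request_fifo.push(p); }

// Initiator side: pop a request, "complete" it, push the response back.
void initiator_step() {
    if (request_fifo.empty()) return;
    Payload* p = request_fifo.front();
    request_fifo.pop();
    // a real initiator would run the AT protocol here before responding
    response_fifo.push(p);
}

// Traffic-generator side: collect the completed payload.
Payload* generator_receive() {
    Payload* p = response_fifo.front();
    response_fifo.pop();
    return p;
}
```

In the real model, sc_fifo additionally provides blocking read/write and SystemC-event integration, which std::queue of course does not.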
Key members of the select_initiator class
From the header select_initiator.h,
the select_initiator class has three main connection members:
sc_core::sc_port<sc_core::sc_fifo_in_if <gp_ptr> > request_in_port;
sc_core::sc_port<sc_core::sc_fifo_out_if <gp_ptr> > response_out_port;
tlm::tlm_initiator_socket<> initiator_socket;
//==============================================================================
/// @file select_initiator.h
//
/// @brief This is a TLM AT initiator
//
//=============================================================================
// Original Authors:
// Bill Bunton, ESLX
// Anna Keist, ESLX
//
//=============================================================================
#ifndef __SELECT_INITIATOR_H__
#define __SELECT_INITIATOR_H__
#include "tlm.h" // TLM headers
#include "tlm_utils/peq_with_get.h" // Payload event queue FIFO
#include <map> // STL map
class select_initiator /// TLM AT select_initiator
: public sc_core::sc_module /// inherit from SC module base class
, virtual public tlm::tlm_bw_transport_if<> /// inherit from TLM "backward interface"
{
SC_HAS_PROCESS(select_initiator);
//==============================================================================
// Ports, exports and Sockets
//==============================================================================
typedef tlm::tlm_generic_payload *gp_ptr; // generic payload
public:
//LL:: What role does each of the two template layers play here?
sc_core::sc_port<sc_core::sc_fifo_in_if <gp_ptr> > request_in_port;
sc_core::sc_port<sc_core::sc_fifo_out_if <gp_ptr> > response_out_port;
tlm::tlm_initiator_socket<> initiator_socket;
//=============================================================================
/// @fn select_initiator
///
/// @brief Constructor for AT Initiator
///
/// @details
/// Generic AT Initiator used in several examples.
/// Constructor offers several parameters for customization
///
//=============================================================================
select_initiator // constructor
( sc_core::sc_module_name name // module name
, const unsigned int ID // initiator ID
, sc_core::sc_time end_rsp_delay // delay
);
//=============================================================================
/// @fn select_initiator::initiator_thread
///
/// @brief Initiator thread
///
/// @details
/// This thread takes generic payloads (gp) from the request FIFO that connects
/// to the traffic generator and initiates. When the transaction completes the
/// gp is placed in the response FIFO to return it to the traffic generator.
//=============================================================================
private:
void initiator_thread (void); // initiator thread
//=============================================================================
/// @fn select_initiator::send_end_rsp_method
///
/// @brief Send end response method
///
/// @details
/// This routine takes transaction responses from the m_send_end_rsp_PEQ.
/// It contains the state machine to manage the communication path to the
/// targets. This method is registered as an SC_METHOD with the SystemC
/// kernel and is sensitive to m_send_end_rsp_PEQ.get_event()
//=============================================================================
private:
void send_end_rsp_method(void); // send end response method
//=============================================================================
/// @brief Implementation of call from targets.
//
/// @details
/// This is the ultimate destination of the nb_transport_bw call from
/// the targets after being routed through a Bus
//
//=====================================================================
tlm::tlm_sync_enum nb_transport_bw( // nb_transport
tlm::tlm_generic_payload& transaction, // transaction
tlm::tlm_phase& phase, // transaction phase
sc_core::sc_time& time); // elapsed time
//==============================================================================
// Required but not implemented member methods
//==============================================================================
void invalidate_direct_mem_ptr( // invalidate_direct_mem_ptr
sc_dt::uint64 start_range, // start range
sc_dt::uint64 end_range); // end range
//==============================================================================
// Private member variables and methods
//==============================================================================
private:
enum previous_phase_enum
{Rcved_UPDATED_enum // Received TLM_UPDATED
,Rcved_ACCEPTED_enum // Received ACCEPTED
,Rcved_END_REQ_enum // Received TLM_BEGIN_RESP
};
typedef std::map<tlm::tlm_generic_payload *, previous_phase_enum>
waiting_bw_path_map;
waiting_bw_path_map m_waiting_bw_path_map; // Wait backward path map
sc_core::sc_event m_enable_next_request_event;
tlm_utils::peq_with_get<tlm::tlm_generic_payload>
m_send_end_rsp_PEQ; // send end response PEQ
const unsigned int m_ID; // initiator ID
const sc_core::sc_time m_end_rsp_delay; // end response delay
bool m_enable_target_tracking; // ? remove
};
#endif /* __SELECT_INITIATOR_H__ */
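m_waiting_bw_path_map records, per in-flight payload, the last phase seen on the backward path, so initiator_thread later knows whether it still has to wait. A minimal sketch of that bookkeeping using only std::map; the enum names mirror the header, while the helper functions note_phase/take_phase are hypothetical:

```cpp
#include <cassert>
#include <map>

struct Payload { };  // stand-in for tlm::tlm_generic_payload

// Mirrors select_initiator's previous_phase_enum
enum previous_phase_enum {
    Rcved_UPDATED_enum,   // backward path last returned TLM_UPDATED
    Rcved_ACCEPTED_enum,  // backward path last returned TLM_ACCEPTED
    Rcved_END_REQ_enum    // END_REQ already seen for this payload
};

std::map<Payload*, previous_phase_enum> waiting_bw_path;

// Record what the backward path last reported for this transaction.
void note_phase(Payload* p, previous_phase_enum e) { waiting_bw_path[p] = e; }

// When the response phase arrives, look up and forget the recorded state.
previous_phase_enum take_phase(Payload* p) {
    previous_phase_enum e = waiting_bw_path.at(p);
    waiting_bw_path.erase(p);
    return e;
}
```

Keying the map on the payload pointer works because the memory manager keeps each payload alive until its final release().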
Main members of the traffic_generator class
public:
sc_core::sc_port<sc_core::sc_fifo_out_if <gp_ptr> > request_out_port;
sc_core::sc_port<sc_core::sc_fifo_in_if <gp_ptr> > response_in_port;
Source file: traffic_generator.h
//=====================================================================
/// @file traffic_generator.h
//
/// @brief traffic_generator class header
//
//=====================================================================
// Authors:
// Bill Bunton, ESLX
// Charles Wilson, ESLX
// Jack Donovan, ESLX
//====================================================================
#ifndef __TRAFFIC_GENERATOR_H__
#define __TRAFFIC_GENERATOR_H__
#include "tlm.h" // TLM headers
#include <queue> // queue header from std lib
class traffic_generator // traffic_generator
: public sc_core::sc_module // sc_module
{
// Member Methods ====================================================
public:
//=============================================================================
/// @fn traffic_generator
//
/// @brief traffic_generator constructor
//
/// @details
/// Initializes the traffic generator, including the active transaction count.
//
//=============================================================================
//LL:: constructor
traffic_generator
( sc_core::sc_module_name name ///< module name for SC
, const unsigned int ID ///< initiator ID
, sc_dt::uint64 base_address_1 ///< first base address
, sc_dt::uint64 base_address_2 ///< second base address
, unsigned int active_txn_count ///< Max number of active transactions //LL:: several transactions may be in flight in the system at the same time
);
//=============================================================================
/// @fn traffic_generator_thread
//
/// @brief traffic_generator processing thread
//
/// @details
/// Method actually called by SC simulator to generate and
/// check traffic. Generate Writes then Reads to check the
/// Writes
//
//=============================================================================
void
traffic_generator_thread
( void
);
//-----------------------------------------------------------------------------
// Check Complete method
void check_complete (void);
//-----------------------------------------------------------------------------
// Check All Complete method
void check_all_complete (void);
// memory manager (queue)
//LL:: each generic payload's data pointer points to a 4-byte data buffer
static const unsigned int m_txn_data_size = 4; // transaction size
//LL:: tg_queue_c is, in the spec's terms, a memory manager for tlm_generic_payload objects.
//LL:: The payload transactions to be sent are created and held in this FIFO queue, then dequeued and sent out one by one.
class tg_queue_c /// memory managed queue class
: public tlm::tlm_mm_interface /// implements memory management IF
{
public:
tg_queue_c /// tg_queue_c constructor
( void
)
{
}
//LL:: new a generic-payload object and push it into the member m_queue; new a contiguous buffer to carry the
//transaction's data, and store that buffer's address in the payload's data-pointer member (unsigned char* m_data).
void
enqueue /// enqueue entry (create)
( void
)
{ //LL:: tlm_generic_payload has two constructors: tlm_generic_payload(); and explicit tlm_generic_payload(tlm_mm_interface* mm);
//LL:: here the second one is called explicitly, passing this queue as the memory manager
tlm::tlm_generic_payload *transaction_ptr = new tlm::tlm_generic_payload ( this ); /// transaction pointer
unsigned char *data_buffer_ptr = new unsigned char [ m_txn_data_size ]; /// data buffer pointer
transaction_ptr->set_data_ptr ( data_buffer_ptr );
m_queue.push ( transaction_ptr );
}
//LL:: dequeue one generic-payload object
tlm::tlm_generic_payload * /// transaction pointer
dequeue /// dequeue entry
( void
)
{
tlm::tlm_generic_payload *transaction_ptr = m_queue.front ();
m_queue.pop();
return transaction_ptr;
}
//LL:: When a payload was allocated from a memory manager, release() decrements its reference count; when the count reaches 0,
//tlm_mm_interface::free() is called to return the object to the manager's pool. Here that free() (defined below) pushes the payload back into m_queue.
void
release /// release entry
( tlm::tlm_generic_payload *transaction_ptr /// transaction pointer
)
{
transaction_ptr->release ();
}
//LL:: The pool's queue is empty: no tlm_generic_payload object is sitting in the pool. In this example that means every payload object is currently being processed somewhere in the system.
bool /// true / false
is_empty /// queue empty
( void
)
{
return m_queue.empty ();
}
//LL:: returns the number of tlm_generic_payload objects currently held in this pool.
size_t /// queue size
size /// queue size
( void
)
{
return m_queue.size ();
}
//LL:: implements tlm::tlm_mm_interface::free(): returns a tlm_generic_payload object to the pool.
//The "pool" here is an abstraction; in this example it is simply a queue of pointers, std::queue<tlm::tlm_generic_payload*>.
void
free /// return transaction
( tlm::tlm_generic_payload *transaction_ptr /// to the pool
)
{
transaction_ptr->reset();
m_queue.push ( transaction_ptr );
}
private:
std::queue<tlm::tlm_generic_payload*> m_queue; /// queue
};
//=============================================================================
// Member Variables
private:
typedef tlm::tlm_generic_payload *gp_ptr; // pointer to a generic payload
const unsigned int m_ID; // initiator ID
sc_dt::uint64 m_base_address_1; // first base address //LL:: purpose?
sc_dt::uint64 m_base_address_2; // second base address
//LL:: m_transaction_queue is the memory manager, i.e. the pool of tlm_generic_payload objects
tg_queue_c m_transaction_queue; // transaction queue
//LL:: number of payload transactions currently being processed
unsigned int m_active_txn_count; // active transaction count
bool m_check_all;
public:
/// Port for requests to the initiator
sc_core::sc_port<sc_core::sc_fifo_out_if <gp_ptr> > request_out_port;
/// Port for responses from the initiator
sc_core::sc_port<sc_core::sc_fifo_in_if <gp_ptr> > response_in_port;
};
#endif /* __TRAFFIC_GENERATOR_H__ */
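tg_queue_c combines a free-list with TLM reference counting: enqueue() creates payloads, release() drops a reference, and at count zero the payload returns to the pool via free() instead of being deleted. Below is a compilable miniature of that life cycle, with plain structs standing in for tlm_generic_payload and tlm_mm_interface; the names Pool, free_txn, and the ref field are illustrative, not TLM API:

```cpp
#include <cassert>
#include <queue>

struct Pool;  // forward declaration: payloads point back to their pool

// Minimal stand-in for a memory-managed tlm_generic_payload.
struct Payload {
    int   ref = 0;       // reference count (acquire/release in real TLM)
    Pool* mm  = nullptr; // owning memory manager
    void acquire() { ++ref; }
    void release();      // defined after Pool
};

// Minimal stand-in for tg_queue_c / tlm_mm_interface.
struct Pool {
    std::queue<Payload*> q;
    // enqueue: create a payload owned by this pool (mirrors tg_queue_c::enqueue)
    void enqueue()            { Payload* p = new Payload; p->mm = this; q.push(p); }
    Payload* dequeue()        { Payload* p = q.front(); q.pop(); return p; }
    bool is_empty() const     { return q.empty(); }
    // free: the payload goes back to the pool instead of being deleted
    void free_txn(Payload* p) { q.push(p); }
};

void Payload::release() {
    if (--ref == 0) mm->free_txn(this);  // return to pool at refcount zero
}
```

The key point is that dequeue() hands the payload to the rest of the system, and the matching release(), not delete, is what eventually makes it reusable.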
For the initiator_top module we now move to step 2: analyzing the port connections between modules.
Putting the above together with the initiator_top constructor yields the bind relationships shown in the figure below.
Constructor source code:
//=====================================================================
/// @file initiator_top.cpp
//
/// @brief Implements instantiation and interconnect of traffic_generator
/// and an initiator via sc_fifos for at_1_phase_example
//
//=====================================================================
// Original Authors:
// Bill Bunton, ESLX
// Charles Wilson, ESLX
// Anna Keist, ESLX
// Jack Donovan, ESLX
//=====================================================================
#include "initiator_top.h" // Top traffic generator & initiator
#include "reporting.h" // reporting macro helpers
static const char *filename = "initiator_top.cpp"; ///< filename for reporting
/// Constructor
initiator_top::initiator_top
( sc_core::sc_module_name name
, const unsigned int ID
, sc_dt::uint64 base_address_1
, sc_dt::uint64 base_address_2
, unsigned int active_txn_count
)
:sc_module (name) // module name for top
,initiator_socket ("at_initiator_socket") // TLM socket
,m_ID (ID) // initiator ID
,m_initiator // Init initiator
("m_initiator_1_phase"
,ID
,sc_core::sc_time(7, sc_core::SC_NS) // set initiator end rsp delay
)
,m_traffic_gen // Init traffic Generator //LL:: the system creates two of these in this example
("m_traffic_gen"
,ID
,base_address_1 // first base address
,base_address_2 // second base address
,active_txn_count // Max active transactions
)
{//LL:: initiator_top::m_request_fifo connects one sc_port of each of its two members, m_traffic_gen and m_initiator.
/// Bind ports to m_request_fifo between m_initiator and m_traffic_gen
m_traffic_gen.request_out_port (m_request_fifo);
m_initiator.request_in_port (m_request_fifo);
/// Bind ports to m_response_fifo between m_initiator and m_traffic_gen
m_initiator.response_out_port (m_response_fifo);
m_traffic_gen.response_in_port (m_response_fifo);
/// Bind initiator-socket to initiator-socket hierarchical connection
m_initiator.initiator_socket(initiator_socket);
}
//=====================================================================
/// @fn initiator_top::invalidate_direct_mem_ptr
///
/// @brief Unused mandatory virtual implementation
///
/// @details
/// No DMI is implemented in this example so unused
///
//=====================================================================
void
initiator_top::invalidate_direct_mem_ptr
( sc_dt::uint64 start_range
,sc_dt::uint64 end_range
)
{
std::ostringstream msg; // log message
msg.str ("");
msg << "Initiator: " << m_ID << " Not implemented";
REPORT_ERROR(filename, __FUNCTION__, msg.str());
} // end invalidate_direct_mem_ptr
//=====================================================================
/// @fn initiator_top::nb_transport_bw
//
/// @brief Unused mandatory virtual implementation
///
/// @details
/// Unused implementation from hierarchical connectivity of
/// Initiator sockets.
///
//=====================================================================
tlm::tlm_sync_enum
initiator_top::nb_transport_bw
( tlm::tlm_generic_payload &payload
,tlm::tlm_phase &phase
,sc_core::sc_time &delta
)
{
std::ostringstream msg; // log message
msg.str ("");
msg << "Initiator: " << m_ID
<< " Not implemented, for hierachical connection of initiator socket";
REPORT_ERROR(filename, __FUNCTION__, msg.str());
return tlm::TLM_COMPLETED;
} // end nb_transport_bw
Figure:
Externally, initiator_top's single connection channel is its public member initiator_socket.
The figure traces the flow of the transaction payload and shows how it is sent out of initiator_top.
Next, look at the members that make up the bus.
The SimpleBusAT bus class consists mainly of two socket arrays:
public:
target_socket_type target_socket[NR_OF_INITIATORS];
initiator_socket_type initiator_socket[NR_OF_TARGETS];
#ifndef __SIMPLEBUSAT_H__
#define __SIMPLEBUSAT_H__
//#include <systemc>
#include "tlm.h"
#include "tlm_utils/simple_target_socket.h"
#include "tlm_utils/simple_initiator_socket.h"
#include "tlm_utils/peq_with_get.h"
//LL:: in this example there are 2 initiators and 2 targets
template <int NR_OF_INITIATORS, int NR_OF_TARGETS>
class SimpleBusAT : public sc_core::sc_module
{
public:
typedef tlm::tlm_generic_payload transaction_type;
typedef tlm::tlm_phase phase_type;
typedef tlm::tlm_sync_enum sync_enum_type;
typedef tlm_utils::simple_target_socket_tagged<SimpleBusAT> target_socket_type;
typedef tlm_utils::simple_initiator_socket_tagged<SimpleBusAT> initiator_socket_type;
public:
target_socket_type target_socket[NR_OF_INITIATORS];
initiator_socket_type initiator_socket[NR_OF_TARGETS];
public:
SC_HAS_PROCESS(SimpleBusAT);
SimpleBusAT(sc_core::sc_module_name name) :
sc_core::sc_module(name),
mRequestPEQ("requestPEQ"),
mResponsePEQ("responsePEQ")
{
for (unsigned int i = 0; i < NR_OF_INITIATORS; ++i) {
//LL:: stored into a member of simple_target_socket, fw_process::m_nb_transport_ptr
target_socket[i].register_nb_transport_fw(this, &SimpleBusAT::initiatorNBTransport, i);
target_socket[i].register_transport_dbg(this, &SimpleBusAT::transportDebug, i);
target_socket[i].register_get_direct_mem_ptr(this, &SimpleBusAT::getDMIPointer, i);
}
for (unsigned int i = 0; i < NR_OF_TARGETS; ++i) {
initiator_socket[i].register_nb_transport_bw(this, &SimpleBusAT::targetNBTransport, i);
initiator_socket[i].register_invalidate_direct_mem_ptr(this, &SimpleBusAT::invalidateDMIPointers, i);
}
SC_THREAD(RequestThread);
SC_THREAD(ResponseThread);
}
//
// Dummy decoder:
// - address[31-28]: portId
// - address[27-0]: masked address
//
unsigned int getPortId(const sc_dt::uint64& address)
{
return (unsigned int)address >> 28;
}
sc_dt::uint64 getAddressOffset(unsigned int portId)
{
return portId << 28;
}
sc_dt::uint64 getAddressMask(unsigned int portId)
{
return 0xfffffff;
}
unsigned int decode(const sc_dt::uint64& address)
{
// decode address:
// - return initiator socket id
return getPortId(address);
}
//
// AT protocol
//
void RequestThread()
{
while (true) {
wait(mRequestPEQ.get_event());
transaction_type* trans;
while ((trans = mRequestPEQ.get_next_transaction())!=0) {
unsigned int portId = decode(trans->get_address());
assert(portId < NR_OF_TARGETS);
initiator_socket_type* decodeSocket = &initiator_socket[portId];
trans->set_address(trans->get_address() & getAddressMask(portId));
// Fill in the destination port
PendingTransactionsIterator it = mPendingTransactions.find(trans);
assert(it != mPendingTransactions.end());
it->second.to = decodeSocket;
phase_type phase = tlm::BEGIN_REQ;
sc_core::sc_time t = sc_core::SC_ZERO_TIME;
// FIXME: No limitation on number of pending transactions
// All targets (that return false) must support multiple transactions
switch ((*decodeSocket)->nb_transport_fw(*trans, phase, t)) {
case tlm::TLM_ACCEPTED:
case tlm::TLM_UPDATED:
// Transaction not yet finished
if (phase == tlm::BEGIN_REQ) {
// Request phase not yet finished
wait(mEndRequestEvent);
} else if (phase == tlm::END_REQ) {
// Request phase finished, but response phase not yet started
wait(t);
} else if (phase == tlm::BEGIN_RESP) {
mResponsePEQ.notify(*trans, t);
// Not needed to send END_REQ to initiator
continue;
} else { // END_RESP
assert(0); exit(1);
}
// only send END_REQ to initiator if BEGIN_RESP was not already send
if (it->second.from) {
phase = tlm::END_REQ;
t = sc_core::SC_ZERO_TIME;
(*it->second.from)->nb_transport_bw(*trans, phase, t);
}
break;
case tlm::TLM_COMPLETED:
// Transaction finished
mResponsePEQ.notify(*trans, t);
// reset to destination port (we must not send END_RESP to target)
it->second.to = 0;
wait(t);
break;
default:
assert(0); exit(1);
};
}
}
}
void ResponseThread()
{
while (true) {
wait(mResponsePEQ.get_event());
transaction_type* trans;
while ((trans = mResponsePEQ.get_next_transaction())!=0) {
PendingTransactionsIterator it = mPendingTransactions.find(trans);
assert(it != mPendingTransactions.end());
phase_type phase = tlm::BEGIN_RESP;
sc_core::sc_time t = sc_core::SC_ZERO_TIME;
target_socket_type* initiatorSocket = it->second.from;
// if BEGIN_RESP is send first we don't have to send END_REQ anymore
it->second.from = 0;
switch ((*initiatorSocket)->nb_transport_bw(*trans, phase, t)) {
case tlm::TLM_COMPLETED:
// Transaction finished
wait(t);
break;
case tlm::TLM_ACCEPTED:
case tlm::TLM_UPDATED:
// Transaction not yet finished
wait(mEndResponseEvent);
break;
default:
assert(0); exit(1);
};
// forward END_RESP to target
if (it->second.to) {
phase = tlm::END_RESP;
t = sc_core::SC_ZERO_TIME;
sync_enum_type r = (*it->second.to)->nb_transport_fw(*trans, phase, t);
assert(r == tlm::TLM_COMPLETED); (void)r;
}
mPendingTransactions.erase(it);
trans->release();
}
}
}
//
// interface methods
//
sync_enum_type initiatorNBTransport(int initiator_id,
transaction_type& trans,
phase_type& phase,
sc_core::sc_time& t)
{
if (phase == tlm::BEGIN_REQ) {
trans.acquire();
addPendingTransaction(trans, 0, initiator_id);
mRequestPEQ.notify(trans, t);
} else if (phase == tlm::END_RESP) {
mEndResponseEvent.notify(t);
return tlm::TLM_COMPLETED;
} else {
std::cout << "ERROR: '" << name()
<< "': Illegal phase received from initiator." << std::endl;
assert(false); exit(1);
}
return tlm::TLM_ACCEPTED;
}
sync_enum_type targetNBTransport(int portId,
transaction_type& trans,
phase_type& phase,
sc_core::sc_time& t)
{
if (phase != tlm::END_REQ && phase != tlm::BEGIN_RESP) {
std::cout << "ERROR: '" << name()
<< "': Illegal phase received from target." << std::endl;
assert(false); exit(1);
}
mEndRequestEvent.notify(t);
if (phase == tlm::BEGIN_RESP) {
mResponsePEQ.notify(trans, t);
}
return tlm::TLM_ACCEPTED;
}
unsigned int transportDebug(int initiator_id, transaction_type& trans)
{
unsigned int portId = decode(trans.get_address());
assert(portId < NR_OF_TARGETS);
initiator_socket_type* decodeSocket = &initiator_socket[portId];
trans.set_address( trans.get_address() & getAddressMask(portId) );
return (*decodeSocket)->transport_dbg(trans);
}
bool limitRange(unsigned int portId, sc_dt::uint64& low, sc_dt::uint64& high)
{
sc_dt::uint64 addressOffset = getAddressOffset(portId);
sc_dt::uint64 addressMask = getAddressMask(portId);
if (low > addressMask) {
// Range does not overlap with addressrange for this target
return false;
}
low += addressOffset;
if (high > addressMask) {
high = addressOffset + addressMask;
} else {
high += addressOffset;
}
return true;
}
bool getDMIPointer(int initiator_id,
transaction_type& trans,
tlm::tlm_dmi& dmi_data)
{
// FIXME: DMI not supported for AT bus?
sc_dt::uint64 address = trans.get_address();
unsigned int portId = decode(address);
assert(portId < NR_OF_TARGETS);
initiator_socket_type* decodeSocket = &initiator_socket[portId];
sc_dt::uint64 maskedAddress = address & getAddressMask(portId);
trans.set_address(maskedAddress);
bool result =
(*decodeSocket)->get_direct_mem_ptr(trans, dmi_data);
if (result)
{
// Range must contain address
assert(dmi_data.get_start_address() <= maskedAddress);
assert(dmi_data.get_end_address() >= maskedAddress);
}
// Should always succeed
sc_dt::uint64 start, end;
start = dmi_data.get_start_address();
end = dmi_data.get_end_address();
limitRange(portId, start, end);
dmi_data.set_start_address(start);
dmi_data.set_end_address(end);
return result;
}
void invalidateDMIPointers(int portId,
sc_dt::uint64 start_range,
sc_dt::uint64 end_range)
{
// FIXME: probably faster to always invalidate everything?
if ((portId >= 0) && !limitRange(portId, start_range, end_range)) {
// Range does not fall into address range of target
return;
}
for (unsigned int i = 0; i < NR_OF_INITIATORS; ++i) {
(target_socket[i])->invalidate_direct_mem_ptr(start_range, end_range);
}
}
private:
void addPendingTransaction(transaction_type& trans,
initiator_socket_type* to,
int initiatorId)
{
const ConnectionInfo info = { &target_socket[initiatorId], to };
assert(mPendingTransactions.find(&trans) == mPendingTransactions.end());
mPendingTransactions[&trans] = info;
}
private:
struct ConnectionInfo {
target_socket_type* from;
initiator_socket_type* to;
};
typedef std::map<transaction_type*, ConnectionInfo> PendingTransactions;
typedef typename PendingTransactions::iterator PendingTransactionsIterator;
typedef typename PendingTransactions::const_iterator PendingTransactionsConstIterator;
private:
PendingTransactions mPendingTransactions;
tlm_utils::peq_with_get<transaction_type> mRequestPEQ;
sc_core::sc_event mBeginRequestEvent;
sc_core::sc_event mEndRequestEvent;
tlm_utils::peq_with_get<transaction_type> mResponsePEQ;
sc_core::sc_event mBeginResponseEvent;
sc_core::sc_event mEndResponseEvent;
};
#endif
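The bus's dummy decoder is pure arithmetic: the top four address bits select a target port and the low 28 bits are the offset within that target. Restated as free functions so the arithmetic can be checked in isolation (taken out of the class for brevity; note the original casts the address to unsigned int before shifting, which assumes addresses fit in 32 bits):

```cpp
#include <cassert>
#include <cstdint>

// address[31:28] -> id of the initiator_socket (i.e. the target port)
unsigned getPortId(uint64_t address)     { return (unsigned)(address >> 28); }
// low 28 bits survive as the local address inside the selected target
uint64_t getAddressMask()                { return 0xfffffff; }
// reconstruct the global base address of a given port
uint64_t getAddressOffset(unsigned port) { return (uint64_t)port << 28; }
```

So a transaction to address 0x10000004 is routed out of initiator_socket[1] carrying local address 0x4 after masking.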
Composition of the at_target_1_phase target module
The main sub-module member of the at_target_1_phase class is a single target socket, and the class inherits a forward-transport (fw) interface:
class at_target_1_phase
: public sc_core::sc_module
, virtual public tlm::tlm_fw_transport_if<>
{
public:
typedef tlm::tlm_generic_payload *gp_ptr;
tlm::tlm_target_socket<> m_memory_socket;
Figure: at_target_1_phase target module
2. Module connection relationships as seen from the constructors
As the initiator_top constructor (listed above) shows:
The ports request_in_port and response_out_port connect mainly to the traffic_generator instance inside initiator_top.
initiator_socket, in turn, is bound hierarchically to the socket of the enclosing initiator_top: it sends out the payloads read from request_in_port, and it passes payloads coming back through the higher-level socket on to response_out_port.