HW3: Implement a NoC in SystemC

Machine Learning Intelligent Chip Design

Description

NoC (Network-on-Chip) is a promising architecture that can help overcome
communication bottlenecks and performance limitations in modern computer
systems. It decouples computing resources from communication resources, allowing
for large-scale parallel processing and highly flexible communication channel
configurations that can be optimized based on specific application requirements.
Additionally, NoC is highly fault-tolerant and scalable, providing a powerful foundation
for future integrated circuit and system architectures. 


Implementation Details

In HW3, you are required to implement a 4x4 mesh-based NoC architecture as
shown in Figure 1. The system architecture includes the following two types of
modules:

  • Router: The routers will be responsible for routing flits between different
    components within the network.
  • Core: Each router will be connected to a core module, which includes the
    Processing Element (PE) and the Network Interface (NI). The PE generates data
    packets, while the NI manages communication between the PE and the router.
Figure 1. 4x4 mesh-based NoC architecture

To keep the system design manageable for this assignment, the TA provides a
pre-written PE. The PE is encapsulated within the core module and exposes three
main functions:

  • void init(int pe_id)
    You need to call this function at the beginning of the simulation. The pe_id is
    numbered sequentially, starting from 0 in the upper-left corner, as shown in
    Figure 1.
  • Packet* get_packet()
    Each call to this function returns the next packet to send. If the PE has
    no more packets to send, it returns nullptr.
    The definition of a packet is shown in Figure 2: each packet contains a
    source_id and a dest_id, along with a floating-point vector datas. The
    length of the vector differs from packet to packet.
Figure 2. Packet structure
  • void check_packet(Packet* p)
    When all flits of a packet have been received, you need to reassemble these
    flits into a packet and pass it to the PE by calling this function. The PE
    verifies whether the packet is correct. Once every PE has received all of
    its packets correctly, the simulation stops immediately and displays the
    following screen. (A sketch of how the NI can drive these three PE
    functions is given after Figure 3.)

Figure 3. Screenshot of successful simulation
Please take a screenshot and place it in your report, ensuring that your
workstation account is visible in the picture.
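
A minimal sketch of how the NI inside a core might drive these three PE
functions is given below. The Packet fields and the PE function signatures
follow the assignment text; the stub bodies, the flit ports, and the thread
structure are this sketch's own assumptions, not the TA's code.

```cpp
// Hypothetical sketch (NOT the TA's code): a Core whose NI drives the PE API.
#include <systemc.h>
#include <vector>

struct Packet {                    // stand-in for the TA's Packet (Figure 2)
    int source_id;
    int dest_id;
    std::vector<float> datas;      // variable-length payload
};

struct PE {                        // stand-in for the TA's pre-written PE
    void init(int pe_id) {}                    // provided by the TA
    Packet* get_packet() { return nullptr; }   // nullptr => nothing left to send
    void check_packet(Packet* p) {}            // provided by the TA
};

SC_MODULE(Core) {
    sc_in<bool>        clk;
    sc_out<sc_bv<34> > flit_out;   // to router; 34-bit flits per the spec
    sc_in<sc_bv<34> >  flit_in;    // from router

    PE  pe;
    int pe_id;

    void ni_send_thread() {
        pe.init(pe_id);                          // call once at simulation start
        while (Packet* p = pe.get_packet()) {    // fetch packets until nullptr
            // TODO: split *p into header/body/tail flits and drive flit_out,
            //       one flit per cycle (one possible encoding follows Figure 5)
            wait();
        }
    }

    SC_CTOR(Core) : pe_id(0) {
        SC_CTHREAD(ni_send_thread, clk.pos());
    }
};
```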

Additionally, the TA also provides the port definitions of the Core and Router
modules (Figure 4). You need to connect each core to its router, and each
router to its top, bottom, left, and right neighbors, as shown in the
architecture in Figure 1. As a reminder, since the size of each flit is limited
to 34 bits, each packet must be decomposed into flits before it is sent to the
router.

Figure 4. Port definitions of Core module and Router module

Figure 5 shows an example of flit format definitions: the first two bits
identify whether a flit is a header, body, or tail flit. You can adopt this
example definition, customize your own flit format, or even modify the port
definitions (one possible encoding is sketched after Figure 5). Please explain
your design considerations (such as latency, bandwidth, complexity, etc.) in
detail in the report.

Figure 5. Example of flit format definitions
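
Below is one possible 34-bit encoding consistent with the Figure 5 example: the
top two bits carry the flit type, and the layout of the remaining 32 bits (one
16-bit ID pair in the header, one raw float per body/tail flit) is purely this
sketch's assumption.

```cpp
// One possible flit encoding; field widths below the 2-bit type are assumptions.
#include <systemc.h>
#include <cstdint>
#include <cstring>

enum FlitType { HEADER = 0, BODY = 1, TAIL = 2 };   // stored in bits [33:32]

sc_bv<34> make_header_flit(int src_id, int dest_id) {
    sc_bv<34> f;
    f.range(33, 32) = HEADER;
    f.range(31, 16) = src_id;      // assumed split: 16 bits per ID
    f.range(15, 0)  = dest_id;
    return f;
}

sc_bv<34> make_data_flit(float value, bool is_last) {
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);   // reinterpret float as raw bits
    sc_bv<34> f;
    f.range(33, 32) = is_last ? TAIL : BODY;
    f.range(31, 0)  = bits;
    return f;
}
```

With this layout a packet carrying N floats becomes one header flit plus N data
flits, the last one tagged TAIL; carrying the payload length in the header
instead would trade header bits for simpler reassembly.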

The main function consists of three separate parts: signal declarations, module
declarations, and module connections. You can reference Figure 4 to declare all
the signals you need in the main function and interconnect the routers and
cores to construct your network, as sketched below.
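
As an illustration, the sketch below walks through the three parts for the
core-router links only. Router and Core are minimal stubs whose port names
(clk, core_in, core_out, flit_in, flit_out) are assumptions; the real names
come from the TA's Figure 4 definitions.

```cpp
// Sketch of sc_main's three parts; module stubs use assumed port names.
#include <systemc.h>
#include <string>
#include <vector>

SC_MODULE(Router) {                  // stub: routing logic omitted
    sc_in<bool>        clk;
    sc_in<sc_bv<34> >  core_in;
    sc_out<sc_bv<34> > core_out;
    // east/west/north/south ports omitted in this stub
    SC_CTOR(Router) {}
};

SC_MODULE(Core) {                    // stub: see the NI sketch after Figure 3
    sc_in<bool>        clk;
    sc_out<sc_bv<34> > flit_out;
    sc_in<sc_bv<34> >  flit_in;
    SC_CTOR(Core) {}
};

int sc_main(int argc, char* argv[]) {
    sc_clock clk("clk", 10, SC_NS);

    // 1) signal declarations: one flit signal per link and direction
    sc_signal<sc_bv<34> > core_to_router[16], router_to_core[16];
    // ... plus east/west/north/south link signals between neighboring routers

    // 2) module declarations
    std::vector<Router*> routers;
    std::vector<Core*>   cores;
    for (int id = 0; id < 16; ++id) {
        routers.push_back(new Router(("router" + std::to_string(id)).c_str()));
        cores.push_back(new Core(("core" + std::to_string(id)).c_str()));
    }

    // 3) module connections (core<->router shown; neighbor links are analogous,
    //    e.g. router id's east port pairs with router id+1's west port when
    //    id % 4 != 3)
    for (int id = 0; id < 16; ++id) {
        routers[id]->clk(clk);
        cores[id]->clk(clk);
        cores[id]->flit_out(core_to_router[id]);
        routers[id]->core_in(core_to_router[id]);
        routers[id]->core_out(router_to_core[id]);
        cores[id]->flit_in(router_to_core[id]);
    }

    sc_start();
    return 0;
}
```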

For the pattern files, the data format is “TO <dest_id> ” or
“FROM <source_id> ”. Each PE reads the corresponding file in the pattern
folder according to its id. You do not need to process these files yourself,
but understanding them will help you debug.

Implementation Notes

A key aspect of the implementation is the choice of routing policy employed by
the routers (e.g., XY routing, west-first adaptive routing, etc.). This policy
determines how data packets are transmitted through the network and directly
affects simulation time; a minimal XY-routing sketch is shown below.
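
For reference, here is a minimal sketch of an XY routing decision for this 4x4
mesh; the Direction enum and function name are this sketch's own, and the
id-to-coordinate mapping follows the assignment's numbering (pe_id 0 in the
upper-left corner).

```cpp
// Minimal XY routing sketch for a 4x4 mesh (names are this sketch's own).
enum Direction { LOCAL, NORTH, SOUTH, EAST, WEST };

Direction xy_route(int here_id, int dest_id) {
    int hx = here_id % 4, hy = here_id / 4;   // column, row (0 = upper-left)
    int dx = dest_id % 4, dy = dest_id / 4;
    if (dx > hx) return EAST;                 // first correct the X offset
    if (dx < hx) return WEST;
    if (dy > hy) return SOUTH;                // row index grows downward
    if (dy < hy) return NORTH;
    return LOCAL;                             // arrived: deliver to the core
}
```

XY routing is deadlock-free on a mesh and cheap to implement, at the cost of no
path diversity; adaptive schemes such as west-first can reduce congestion but
need extra care to remain deadlock-free.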

Note that in this assignment, only sc_in and sc_out may be used for ports.
Channels and interfaces that were used in HW2 are not allowed. Additionally,
using pointers in port definitions is also forbidden, since a pointer has no
meaningful counterpart in hardware, as the snippet below illustrates.
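
The snippet below illustrates the restriction with hypothetical port names:

```cpp
#include <systemc.h>

SC_MODULE(RouterPortsDemo) {
    sc_in<bool>        clk;
    sc_in<sc_bv<34> >  in_west;      // OK: plain sc_in with a value type
    sc_out<sc_bv<34> > out_west;     // OK: plain sc_out
    // sc_fifo_in<sc_bv<34> > q_in;  // NOT allowed: channel-based port (HW2 style)
    // sc_in<Packet*> pkt_in;        // NOT allowed: pointer type in a port
    SC_CTOR(RouterPortsDemo) {}
};
```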

Submission Guidelines

  • Please compress a folder named HW_<student-ID> into a zip file with
    the same name and upload it to E3.
  • The folder should include:
    o Report (Name: HW_<student-ID>.pdf)
    o Codes
    o Makefile
    o pattern folder
  • Ensure that your code is well-commented and organized for clarity and
    understanding.
  • Plagiarism is forbidden; plagiarized submissions will receive 0 points!

Deliverables

  • SystemC Implementation:
    Use SystemC to implement the 4x4 mesh-based NoC architecture.
  • Report:
    A brief report document containing
    o Simulation results with your workstation account.
    o How do you design the router and NI? What routing algorithm do you use?
    What is the depth of the buffer? Do you use virtual channels?
    o Your implementation approach, challenges faced, and any observations or
    insights gained during the implementation and simulation process.