The only shortcoming: it is unclear where the saved (pretrained) model comes from.
I. Proposed Algorithm Framework
If the source image is in PET or SPECT modality, it is converted to YCbCr data, consisting of Y1, Cb, and Cr. The MRI (or CT) image is grayscale, and we denote it Y2 (or Y1). Y1 and Y2 are then fed into the network, which outputs the fused data Y. If the source data include a PET or SPECT image, the Y, Cb, and Cr data are converted back to RGB channels to obtain the fused image. Once the network's training is complete, the fusion process runs automatically, with no additional parameter tuning.
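The color-handling steps above can be sketched as follows. This is a minimal sketch, not the paper's code: the BT.601 conversion coefficients and the `fuse_luma` placeholder (which stands in for the network) are my assumptions, since the text does not specify the exact transform.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """RGB (H, W, 3) in [0, 1] -> Y, Cb, Cr planes (ITU-R BT.601 coefficients, assumed)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse BT.601 conversion back to an RGB (H, W, 3) array."""
    cb, cr = cb - 0.5, cr - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def fuse(pet_rgb, mri_gray, fuse_luma):
    """PET/SPECT (color) + MRI/CT (grayscale) -> fused color image."""
    y1, cb, cr = rgb_to_ycbcr(pet_rgb)   # Y1 from the functional (color) image
    y2 = mri_gray                        # Y2 is the anatomical image itself
    y = fuse_luma(y1, y2)                # the trained network replaces this call
    return ycbcr_to_rgb(y, cb, cr)       # Cb/Cr are carried over unchanged
```

Only the luminance channels are fused; the chrominance (Cb, Cr) of the functional image is reattached afterwards, which is why the framework preserves the PET/SPECT color information.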
II. Proposed New Network: the MSRPAN Structure
A combination of two networks: a residual attention network and a pyramid attention network.
Fig. 2. Residual attention network (residual network + pyramid attention network).
III. Algorithm Schematic
(1) Feature extraction network:
First half:
import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    def __init__(self, level):
        super(FeatureExtraction, self).__init__()
        self.level = level
        # 1x1 convolution lifting the single input channel to 64 feature maps
        self.conv0 = nn.Conv2d(1, 64, (1, 1), (1, 1), (0, 0))
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.down = nn.AvgPool2d(2, 2)
        self.lu = nn.ReLU()
        # note: a single block instance, so the three calls in forward() share weights
        self.block = block()

    def forward(self, x):
        tem = self.conv0(x)
        # coarse attention map (down- then upsampled x) gates tem; residual add of x
        a = torch.mul(self.lu(self.up(self.down(x))), tem) + x
        tem = self.block(tem)
        tem = self.block(tem)
        tem = self.block(tem)
        return torch.mul(self.lu(self.up(self.down(a))), tem) + a
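The recurring pattern `torch.mul(relu(up(down(x))), features) + x` acts as a coarse spatial attention gate with a residual connection. A minimal standalone sketch (the function name is mine, not from the paper):

```python
import torch
import torch.nn as nn

def attention_residual(x, features):
    """Gate `features` by a blurred (down/up-sampled) map of x, then add x back.

    Downsampling then nearest-neighbour upsampling yields a coarse, ReLU-clipped
    spatial map; multiplying it into `features` emphasises regions that remain
    active at low resolution, while `+ x` keeps a residual identity path.
    """
    down = nn.AvgPool2d(2, 2)
    up = nn.Upsample(scale_factor=2, mode='nearest')
    mask = torch.relu(up(down(x)))          # coarse non-negative attention map
    return torch.mul(mask, features) + x    # gated features + residual

x = torch.rand(1, 1, 8, 8)       # single-channel input, as in FeatureExtraction
feats = torch.rand(1, 64, 8, 8)  # 64-channel features; channel dim broadcasts
out = attention_residual(x, feats)
print(out.shape)  # torch.Size([1, 64, 8, 8])
```

Note that the single-channel mask and residual broadcast across the 64 feature channels, which is why `conv0`'s 1-to-64 lift and the raw input can be combined directly.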
Second half:
class block(nn.Module):
    def __init__(self):
        super(block, self).__init__()
        self.conv0 = nn.Conv2d(64, 64, (1, 1), (1, 1), (0, 0))  # defined but unused in forward()
        self.conv1 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.down = nn.AvgPool2d(2, 2)
        self.lu = nn.ReLU()
        self.norm = nn.BatchNorm2d(64)

    def forward(self, x):
        # pyramid level 0 (half resolution): sum of 1-, 2-, and 3-deep conv stacks
        p0 = self.down(x)
        put0 = self.conv1(p0) + self.conv1(self.conv1(p0)) + self.conv1(self.conv1(self.conv1(p0)))
        out0 = torch.mul(self.lu(self.up(self.down(p0))), put0) + p0
        # pyramid level 1 (quarter resolution)
        p1 = self.down(p0)
        put1 = self.conv1(p1) + self.conv1(self.conv1(p1)) + self.conv1(self.conv1(self.conv1(p1)))
        out1 = torch.mul(self.lu(self.up(self.down(p1))), put1) + p1
        # pyramid level 2 (eighth resolution)
        p2 = self.down(p1)
        put2 = self.conv1(p2) + self.conv1(self.conv1(p2)) + self.conv1(self.conv1(self.conv1(p2)))
        out2 = torch.mul(self.lu(self.up(self.down(p2))), put2) + p2
        # merge the pyramid from coarse to fine
        out2 = self.up(out2)
        out1 = out1 + out2
        out1 = self.up(out1)
        out0 = out0 + out1
        out0 = self.up(out0)
        # final attention-gated residual at full resolution, then batch norm
        out = torch.mul(self.lu(self.up(self.down(x))), out0) + x
        out = self.norm(out)
        return out
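Note that `block` halves the spatial resolution four times along its deepest branch (`x -> p0 -> p1 -> p2 -> down(p2)`), so the residual additions only line up when the input height and width are multiples of 16: with odd intermediate sizes, `AvgPool2d` floors and the nearest-neighbour upsample no longer restores the original shape. A minimal check (my illustration, not from the text):

```python
import torch
import torch.nn as nn

down = nn.AvgPool2d(2, 2)
up = nn.Upsample(scale_factor=2, mode='nearest')

ok = torch.rand(1, 64, 32, 32)   # 32 is a multiple of 16: shapes round-trip
print(up(down(ok)).shape == ok.shape)  # True

y = torch.rand(1, 64, 30, 30)    # 30 survives one halving, then floors: 15 -> 7 -> 3 -> 1
for _ in range(4):
    y = down(y)                  # 30 -> 15 -> 7 -> 3 -> 1
for _ in range(4):
    y = up(y)                    # 1 -> 2 -> 4 -> 8 -> 16, not 30
print(y.shape)                   # torch.Size([1, 64, 16, 16]) -- a residual add with x would fail
```

In practice this means input images should be padded or cropped to dimensions divisible by 16 before being fed to the network.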
(2) Feature fusion
(3) Feature reconstruction
IV. Fused Image
Addition fusion strategy
Average fusion strategy
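The two strategies listed above reduce to simple elementwise rules. A hedged sketch (function names are mine; the text does not state whether the rules are applied to feature maps or directly to luminance channels):

```python
import torch

def fuse_add(f1, f2):
    """Addition strategy: elementwise sum of the two inputs."""
    return f1 + f2

def fuse_mean(f1, f2):
    """Average strategy: elementwise mean, halving the summed response."""
    return (f1 + f2) / 2

f1 = torch.full((1, 64, 8, 8), 0.25)
f2 = torch.full((1, 64, 8, 8), 0.75)
print(fuse_add(f1, f2)[0, 0, 0, 0].item())   # 1.0
print(fuse_mean(f1, f2)[0, 0, 0, 0].item())  # 0.5
```

Addition preserves the full energy of both sources (at the risk of over-brightening), while averaging keeps the output in the same dynamic range as the inputs.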