Context-aware network fusing transformer and V-Net for semi-supervised segmentation of 3D left atrium
Research Background and Motivation
Background:
1. For medical experts, producing reliable annotations is tedious and time-consuming, and because of expert subjectivity, manual annotation can itself introduce segmentation discrepancies.
2. Medical institutions usually hold large amounts of unlabeled data, so the goal is to make full use of it.
Motivation:
1. A 3D medical image consists of a stack of slices: the model needs to learn not only the contextual information within a single slice, but also the contextual information across slices (the relations between different tissues and regions).
2. Existing methods rarely exploit both kinds of information at the same time.
Main Contributions
1. Fuse a Transformer into V-Net.
2. Design a discriminator with an attention mechanism, introducing strong shape and position priors.
3. Significantly improve the accuracy and robustness of LA segmentation, though potential issues such as a large parameter count remain.
Method
A Transformer is used at the V-Net bottleneck to extract global contextual information.
def TransformerLayer(self, features):
    # features[4] is the output of the last (bottleneck) encoder stage
    x5 = features[4]
    # embed the 3D feature map into a token sequence with positional encoding
    embedding_output = self.embeddings(x5)
    # 12 Transformer layers model global context across all positions
    transformer_output, attn_weights = self.transformer(embedding_output)
    # fold the token sequence back into a 3D feature map
    detransformer_output = self.detransformer(transformer_output)
    features[4] = detransformer_output
    return features
The output x5 of the last encoder layer is embedded with positional encoding; the embedded output passes through 12 Transformer layers and is then fed to the decoder to produce the segmentation result.
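The embeddings / transformer / detransformer modules are not shown in the snippet. A minimal sketch of what they could look like, built from standard PyTorch pieces: the class name, patch layout, and hyperparameters here are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class BottleneckTransformer(nn.Module):
    """Sketch of a Transformer bottleneck over a 3D feature map x5 of shape
    (B, C, D, H, W). num_tokens must equal D*H*W; all sizes are illustrative."""
    def __init__(self, channels=256, hidden=512, num_layers=12, num_heads=8, num_tokens=32):
        super().__init__()
        # 1x1x1 conv projects channels to the Transformer hidden size ("embeddings")
        self.proj = nn.Conv3d(channels, hidden, kernel_size=1)
        # learned positional encoding, one vector per spatial position
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, hidden))
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # project back to the original channel count ("detransformer")
        self.back = nn.Conv3d(hidden, channels, kernel_size=1)

    def forward(self, x5):
        b, c, d, h, w = x5.shape
        tokens = self.proj(x5).flatten(2).transpose(1, 2)  # (B, D*H*W, hidden)
        tokens = tokens + self.pos_embed                   # positional encoding
        tokens = self.transformer(tokens)                  # global context across all voxels
        out = tokens.transpose(1, 2).reshape(b, -1, d, h, w)
        return self.back(out)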
DAM (discriminator with attention mechanism)
The DAM consists of five convolutional layers and an MLP; a modified SENet block is added after the original five convolutions to improve the discriminator's performance.
import torch
import torch.nn as nn
# SEAttention is assumed to come from a 3D-compatible SE implementation
# (see the sketch after this block); adjust the import to your project layout.
from attention.SEAttention import SEAttention

class FC3DDiscriminator(nn.Module):
    def __init__(self, num_classes, ndf=64, n_channel=1):
        super(FC3DDiscriminator, self).__init__()
        # each stride-2 conv halves the spatial size; overall downsampling is 16x
        self.conv0 = nn.Conv3d(num_classes, ndf, kernel_size=4, stride=2, padding=1)
        self.conv1 = nn.Conv3d(n_channel, ndf, kernel_size=4, stride=2, padding=1)
        self.conv2 = nn.Conv3d(ndf, ndf*2, kernel_size=4, stride=2, padding=1)
        self.conv3 = nn.Conv3d(ndf*2, ndf*4, kernel_size=4, stride=2, padding=1)
        self.conv4 = nn.Conv3d(ndf*4, ndf*8, kernel_size=4, stride=2, padding=1)
        # the SE block must be created here, not inside forward(), so that its
        # weights are registered with the module and actually get trained
        self.se = SEAttention(channel=ndf*8, reduction=8)
        self.avgpool = nn.AvgPool3d((7, 7, 5))
        self.classifier = nn.Linear(ndf*8, 2)
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
        self.dropout = nn.Dropout3d(0.5)

    def forward(self, map, image):
        batch_size = map.shape[0]
        # fuse the predicted map and the raw image into one feature map
        map_feature = self.conv0(map)
        image_feature = self.conv1(image)
        x = torch.add(map_feature, image_feature)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv2(x)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv3(x)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv4(x)
        x = self.leaky_relu(x)
        # channel attention on the deepest features (512 = ndf*8 channels)
        x = self.se(x)
        x = self.avgpool(x)
        x = x.view(batch_size, -1)
        x = self.classifier(x)  # two logits: labeled vs. unlabeled
        return x
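SEAttention itself is not defined in the snippet; since it is applied to 5D feature maps here, it has to be a 3D-compatible squeeze-and-excitation block. A minimal sketch of such a block, with illustrative names:

import torch.nn as nn

class SEAttention3D(nn.Module):
    """Minimal 3D squeeze-and-excitation block (sketch; names are illustrative)."""
    def __init__(self, channel=512, reduction=8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool3d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid(),                        # excitation: channel weights in (0, 1)
        )

    def forward(self, x):
        b, c = x.shape[:2]
        y = self.avg_pool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1, 1)
        return x * y                             # reweight channels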
When I reproduced the paper, using the SE attention actually gave worse results than leaving it out.
x9 = self.block_nine(x8_up)
if self.has_dropout:
    x9 = self.dropout(x9)
out = self.out_conv(x9)
out_tanh = self.tanh(out)     # SDF regression branch, values in (-1, 1)
out_seg = self.out_conv2(x9)  # segmentation branch
A tanh activation on the last decoder layer squashes the outputs into the interval (-1, 1), matching the value range of the normalized signed distance map.
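The tanh branch is trained to regress a normalized signed distance map of the ground-truth mask (see compute_sdf in the loss code below). A sketch in the style of the SASSNet/DTC compute_sdf, for a single binary numpy mask; not necessarily the exact code used:

import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.segmentation import find_boundaries

def compute_sdf_sketch(mask):
    """Normalized signed distance map of a binary mask (sketch, SASSNet/DTC style).
    Negative inside the object, positive outside, zero on the boundary,
    normalized to [-1, 1] so it matches the tanh output range."""
    posmask = mask.astype(bool)
    if not posmask.any():
        return np.zeros_like(mask, dtype=np.float32)
    negmask = ~posmask
    posdis = distance_transform_edt(posmask)  # distance to boundary, inside
    negdis = distance_transform_edt(negmask)  # distance to boundary, outside
    sdf = (negdis - negdis.min()) / (negdis.max() - negdis.min()) \
        - (posdis - posdis.min()) / (posdis.max() - posdis.min())
    sdf[find_boundaries(posmask, mode='inner')] = 0
    return sdf.astype(np.float32)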
for i_batch, sampled_batch in enumerate(trainloader):
    time2 = time.time()
    volume_batch, label_batch = sampled_batch['image'], sampled_batch['label']
    volume_batch, label_batch = volume_batch.cuda(), label_batch.cuda()
    # discriminator targets, matching the sampler: the first two samples
    # in each batch are labeled (1), the last two unlabeled (0)
    Dtarget = torch.tensor([1, 1, 0, 0]).cuda()
    model.train()
    D.eval()  # the discriminator is frozen during the generator step
    outputs_tanh, outputs = model(volume_batch)  # SDF branch and segmentation branch
    outputs_soft = torch.sigmoid(outputs)
The batch size is set to 4: each batch feeds in two labeled and two unlabeled images. Dtarget is the discriminator's label, used to tell whether the DAM's input comes from a labeled or an unlabeled image.
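This 2-labeled + 2-unlabeled batch composition is typically produced by a two-stream batch sampler, as in the UA-MT/DTC codebases. A simplified sketch, assuming the labeled samples sit at the front of the dataset; names and details are illustrative:

import numpy as np
from torch.utils.data import Sampler

class TwoStreamBatchSampler(Sampler):
    """Sketch: each batch = labeled_bs labeled indices followed by
    (batch_size - labeled_bs) unlabeled indices."""
    def __init__(self, labeled_idxs, unlabeled_idxs, batch_size, labeled_bs):
        self.labeled_idxs = labeled_idxs
        self.unlabeled_idxs = unlabeled_idxs
        self.labeled_bs = labeled_bs
        self.unlabeled_bs = batch_size - labeled_bs

    def __iter__(self):
        lab = np.random.permutation(self.labeled_idxs)
        unlab = np.random.permutation(self.unlabeled_idxs)
        lab_batches = zip(*[iter(lab)] * self.labeled_bs)
        unlab_batches = zip(*[iter(unlab)] * self.unlabeled_bs)
        # labeled samples come first in each batch, matching Dtarget = [1, 1, 0, 0]
        for l, u in zip(lab_batches, unlab_batches):
            yield list(l) + list(u)

    def __len__(self):
        return len(self.labeled_idxs) // self.labeled_bs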
Learning Strategy
Loss function
Supervised loss = segmentation loss + α × LSM (level-set map) loss, where α corresponds to args.beta in the code below.
## calculate the loss
with torch.no_grad():
    # ground-truth signed distance map, computed on CPU from the label volume
    gt_dis = compute_sdf(label_batch[:].cpu().numpy(), outputs[:labeled_bs, 0, ...].shape)
    gt_dis = torch.from_numpy(gt_dis).float().cuda()
# LSM loss: MSE between the tanh branch and the ground-truth SDF
loss_sdf = mse_loss(outputs_tanh[:labeled_bs, 0, ...], gt_dis)
loss_seg = ce_loss(outputs[:labeled_bs, 0, ...], label_batch[:labeled_bs].float())
loss_seg_dice = losses.dice_loss(outputs_soft[:labeled_bs, 0, :, :, :], label_batch[:labeled_bs] == 1)
# adversarial weight is ramped up over training (see the sketch below)
consistency_weight = get_current_consistency_weight(iter_num // 150)
supervised_loss = loss_seg_dice + args.beta * loss_sdf
# run the discriminator on the unlabeled half of the batch
Doutputs = D(outputs_tanh[labeled_bs:], volume_batch[labeled_bs:])
# G wants D to misclassify unlabeled data as labeled data,
# so the adversarial target is the labeled label (Dtarget[:labeled_bs] == [1, 1])
loss_adv = F.cross_entropy(Doutputs, (Dtarget[:labeled_bs]).long())
loss = supervised_loss + consistency_weight * loss_adv
optimizer.zero_grad()
loss.backward()
optimizer.step()
dc = metrics.dice(torch.argmax(outputs_soft[:labeled_bs], dim=1), label_batch[:labeled_bs])
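get_current_consistency_weight ramps the adversarial weight up over training; in the UA-MT/DTC codebases it is the sigmoid ramp-up from the Mean Teacher code. A sketch under that assumption; the default values of consistency and consistency_rampup are illustrative:

import numpy as np

def sigmoid_rampup(current, rampup_length):
    """Exponential sigmoid ramp-up from 0 to 1 (Mean Teacher style)."""
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_current_consistency_weight(epoch, consistency=0.1, consistency_rampup=40.0):
    # weight grows smoothly from ~0 to `consistency` over `consistency_rampup` steps
    return consistency * sigmoid_rampup(epoch, consistency_rampup)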
For a detailed walkthrough of the discriminator, see this post:
Semi-supervised 3D Medical Image Segmentation (Part 4): SASSNet
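The excerpt above only shows the generator step (with D.eval()); the discriminator has its own update. In SASSNet-style adversarial training it looks roughly like this (a sketch, with Doptimizer a hypothetical name for the discriminator's optimizer; variable names follow the training loop above):

# Sketch of the discriminator update (not shown in the excerpt above).
model.eval()
D.train()
with torch.no_grad():
    outputs_tanh, _ = model(volume_batch)  # SDF predictions for the whole batch
Doutputs = D(outputs_tanh, volume_batch)
# D is trained to tell labeled (1) and unlabeled (0) predictions apart
D_loss = F.cross_entropy(Doutputs, Dtarget.long())
Doptimizer.zero_grad()
D_loss.backward()
Doptimizer.step()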
Summary
Having reproduced several semi-supervised networks, I found that this one trains comparatively fast: training finishes within 2-3 hours. The evaluation metrics come within about 0.1 to 0.2 of the 10%-label results reported in the paper.