The Path Forward in Medical Image Segmentation: UNet

Since I will soon be working on CT image segmentation, I have been studying segmentation networks. This post covers the UNet network, summarizing what I learned from the paper "U-Net: Convolutional Networks for Biomedical
Image Segmentation" and the code repository: milesial/Pytorch-UNet: PyTorch implementation of the U-Net for image semantic segmentation with high quality images (github.com)

  • This post analyzes the repository mainly in terms of the network architecture, dataset loading, and the choice of optimizer and loss function.

1. Network Architecture

The overall UNet architecture is quite clean: it builds on the fully convolutional network (FCN) idea and, as its name suggests, forms a U shape. The network is made of three building blocks: a double-convolution module, a downsampling module, and an upsampling module. The overall architecture is shown in the figure.

The double-convolution module (blue arrows) is two repetitions of 3x3 Conv + BN + ReLU. The downsampling module is a max-pooling layer followed by a double-convolution module, and the upsampling module is an upsampling (or transposed-convolution) layer followed by a double-convolution module. The gray arrows are skip connections that reinforce the features: because the feature maps at the two ends of an arrow differ in spatial size, the original paper crops the encoder feature map (this repository pads the decoder feature map instead), and the two maps are then concatenated along the channel dimension, strengthening feature reuse. The network finally outputs a mask prediction with the same spatial size as the input image, where the number of output channels equals the number of classes. The code is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F


class DoubleConv(nn.Module):
    """(convolution => [BN] => ReLU) * 2"""
    # Stage 1: two 3x3 convolutions; note the original paper uses neither BatchNorm2d nor padding=1
    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.double_conv(x)

class Down(nn.Module):
    """Downscaling with maxpool then double conv"""
    # Stage 2
    # Downsampling: max pooling followed by a double convolution
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            DoubleConv(in_channels, out_channels)
        )

    def forward(self, x):
        return self.maxpool_conv(x)


class Up(nn.Module):
    """Upscaling then double conv"""

    def __init__(self, in_channels, out_channels, bilinear=True):
        super().__init__()

        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # input is CHW
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]

        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)

class OutConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OutConv, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)
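
To see how the skip connection is stitched together, here is a minimal shape check of the `Up` block (an illustrative sketch, not part of the repository; it uses the transposed-convolution branch, i.e. bilinear=False):

# Illustrative shape check for the Up block (bilinear=False branch)
import torch

up = Up(in_channels=1024, out_channels=512, bilinear=False)
x1 = torch.randn(1, 1024, 32, 32)   # decoder feature map (smaller spatial size)
x2 = torch.randn(1, 512, 64, 64)    # encoder skip feature map
out = up(x1, x2)                    # upsample x1, pad to match x2, concatenate, double conv
print(out.shape)                    # torch.Size([1, 512, 64, 64])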

The full UNet is then assembled from these modules:

class UNet(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=False):
        super(UNet, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = (DoubleConv(n_channels, 64))
        self.down1 = (Down(64, 128))
        self.down2 = (Down(128, 256))
        self.down3 = (Down(256, 512))
        factor = 2 if bilinear else 1
        self.down4 = (Down(512, 1024 // factor))
        self.up1 = (Up(1024, 512 // factor, bilinear))
        self.up2 = (Up(512, 256 // factor, bilinear))
        self.up3 = (Up(256, 128 // factor, bilinear))
        self.up4 = (Up(128, 64, bilinear))
        self.outc = (OutConv(64, n_classes))

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)

        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        logits = self.outc(x)
        return logits

    def use_checkpointing(self):
        # Optional gradient checkpointing, as provided by the repository: trades extra compute for lower GPU memory use
        self.inc = torch.utils.checkpoint(self.inc)
        self.down1 = torch.utils.checkpoint(self.down1)
        self.down2 = torch.utils.checkpoint(self.down2)
        self.down3 = torch.utils.checkpoint(self.down3)
        self.down4 = torch.utils.checkpoint(self.down4)
        self.up1 = torch.utils.checkpoint(self.up1)
        self.up2 = torch.utils.checkpoint(self.up2)
        self.up3 = torch.utils.checkpoint(self.up3)
        self.up4 = torch.utils.checkpoint(self.up4)
        self.outc = torch.utils.checkpoint(self.outc)

To inspect each layer's output shape and the parameter counts, you can use the summary function from torchsummary:

    # Inspect the network structure; unlike the paper, this implementation uses padding and BN
    import torch
    from torchsummary import summary
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = UNet(n_channels=3, n_classes=2, bilinear=False)
    model.to(device)
    summary(model=model, input_size=(3, 512, 512))

That is the complete UNet architecture. It differs from the original paper in that this repository uses padding and BatchNorm layers, so the per-layer feature-map sizes do not match the ones reported in the paper (here each convolution preserves the spatial size).
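
A quick sanity check of that claim (a minimal sketch with a dummy 512x512 input, not from the repository): the logits have the same height and width as the input image.

import torch

model = UNet(n_channels=3, n_classes=2, bilinear=False)
x = torch.randn(1, 3, 512, 512)           # dummy RGB input
with torch.no_grad():
    logits = model(x)
print(logits.shape)                       # torch.Size([1, 2, 512, 512])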

2. Dataset Loading

A UNet dataset consists of two parts: (1) the original images and (2) the annotated mask images. Since I have not yet created multi-class labels myself, the details of label preparation will be added in a later update.

The repository loads the original images and their masks with the following dataset class.

import logging
import numpy as np
import torch
from functools import partial
from multiprocessing import Pool
from os import listdir
from os.path import splitext, isfile, join
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from tqdm import tqdm
# load_image and unique_mask_values are helper functions defined alongside this class in the repository


class BasicDataset(Dataset):
    def __init__(self, images_dir: str, mask_dir: str, scale: float = 1.0, mask_suffix: str = ''):
        self.images_dir = Path(images_dir)
        self.mask_dir = Path(mask_dir)
        assert 0 < scale <= 1, 'Scale must be between 0 and 1'
        self.scale = scale
        self.mask_suffix = mask_suffix

        self.ids = [splitext(file)[0] for file in listdir(images_dir) if isfile(join(images_dir, file)) and not file.startswith('.')]
        if not self.ids:
            raise RuntimeError(f'No input file found in {images_dir}, make sure you put your images there')

        logging.info(f'Creating dataset with {len(self.ids)} examples')
        logging.info('Scanning mask files to determine unique values')
        with Pool() as p:
            unique = list(tqdm(
                p.imap(partial(unique_mask_values, mask_dir=self.mask_dir, mask_suffix=self.mask_suffix), self.ids),
                total=len(self.ids)
            ))

        self.mask_values = list(sorted(np.unique(np.concatenate(unique), axis=0).tolist()))
        logging.info(f'Unique mask values: {self.mask_values}')

    def __len__(self):
        return len(self.ids)

    @staticmethod
    def preprocess(mask_values, pil_img, scale, is_mask):
        w, h = pil_img.size
        newW, newH = int(scale * w), int(scale * h)
        assert newW > 0 and newH > 0, 'Scale is too small, resized images would have no pixel'
        # Masks are resized with nearest-neighbor interpolation, images with bicubic interpolation
        pil_img = pil_img.resize((newW, newH), resample=Image.NEAREST if is_mask else Image.BICUBIC)
        img = np.asarray(pil_img)

        if is_mask:
            """
            遍历 mask_values 中的每个值(可能是掩模中的不同类别标签),
            并将原始图像 img 中相应像素值等于该标签的位置设置为对应的类别标签。
            """
            mask = np.zeros((newH, newW), dtype=np.int64)
            for i, v in enumerate(mask_values):
                if img.ndim == 2:
                    mask[img == v] = i
                else:
                    mask[(img == v).all(-1)] = i

            return mask

        else:
            # Single-channel images get a new channel axis; multi-channel images are transposed from HWC to CHW
            if img.ndim == 2:
                img = img[np.newaxis, ...]
            else:
                img = img.transpose((2, 0, 1))
            # Normalize pixel values to [0, 1] if they exceed 1
            if (img > 1).any():
                img = img / 255.0

            return img

    def __getitem__(self, idx):
        """通过索引方式访问该数据"""
        name = self.ids[idx]
        mask_file = list(self.mask_dir.glob(name + self.mask_suffix + '.*'))
        img_file = list(self.images_dir.glob(name + '.*'))

        assert len(img_file) == 1, f'Either no image or multiple images found for the ID {name}: {img_file}'
        assert len(mask_file) == 1, f'Either no mask or multiple masks found for the ID {name}: {mask_file}'
        mask = load_image(mask_file[0])
        img = load_image(img_file[0])

        assert img.size == mask.size, \
            f'Image and mask {name} should be the same size, but are {img.size} and {mask.size}'

        img = self.preprocess(self.mask_values, img, self.scale, is_mask=False)
        mask = self.preprocess(self.mask_values, mask, self.scale, is_mask=True)

        return {
            'image': torch.as_tensor(img.copy()).float().contiguous(),
            'mask': torch.as_tensor(mask.copy()).long().contiguous()
        }

Preprocessing first rescales both the image and the mask: the mask with nearest-neighbor interpolation (so no invalid label values are created) and the image with bicubic interpolation. Each distinct pixel value found in the masks is then mapped to an integer class index, i.e. the different objects or regions in the mask are assigned different integer labels. Finally, training and validation sets are wrapped in DataLoaders.
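
A minimal sketch of that last step (the directory paths, split ratio, and batch size below are placeholders, not taken from the repository):

from torch.utils.data import DataLoader, random_split

dataset = BasicDataset('data/imgs', 'data/masks', scale=0.5)

# Hold out 10% of the samples for validation
n_val = int(len(dataset) * 0.1)
n_train = len(dataset) - n_val
train_set, val_set = random_split(dataset, [n_train, n_val])

train_loader = DataLoader(train_set, batch_size=4, shuffle=True, num_workers=4, pin_memory=True)
val_loader = DataLoader(val_set, batch_size=4, shuffle=False, num_workers=4, pin_memory=True)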

3. Loss Function

The UNet training loss combines binary cross-entropy (or cross-entropy in the multi-class case) with a Dice loss. The cross-entropy term handles the per-pixel classification loss, while the Dice term measures the quality of the predicted segmentation. The relevant code from the training loop:

                    if model.n_classes == 1:   # binary segmentation (single class)
                        loss = criterion(masks_pred.squeeze(1), true_masks.float())
                        # The Dice coefficient is a set-similarity measure in [0, 1], commonly used to compare two samples
                        loss += dice_loss(F.sigmoid(masks_pred.squeeze(1)), true_masks.float(), multiclass=False)
                    else:  # multi-class segmentation
                        loss = criterion(masks_pred, true_masks)
                        loss += dice_loss(
                            # express the per-pixel class scores as probabilities
                            F.softmax(masks_pred, dim=1).float(),
                            F.one_hot(true_masks, model.n_classes).permute(0, 3, 1, 2).float(),
                            multiclass=True
                        )
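
The `criterion` used above is not shown in this excerpt; a minimal sketch consistent with the two branches (my assumption of how it is set up, not copied verbatim from the repository) would be:

import torch.nn as nn

# Cross-entropy for multi-class segmentation, BCE-with-logits for a single foreground class
criterion = nn.CrossEntropyLoss() if model.n_classes > 1 else nn.BCEWithLogitsLoss()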

In the multi-class branch, the per-pixel class scores of the prediction are first turned into probabilities with softmax and the ground truth is converted to a one-hot encoding; the Dice coefficient is then computed between the two. The larger its value, the greater the overlap between prediction and ground truth, i.e. the more accurate the prediction:
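
For reference, with prediction P, ground truth G, and a small smoothing term (matching epsilon=1e-6 in the code), the Dice coefficient and the Dice loss are:

\mathrm{Dice}(P, G) = \frac{2\,|P \cap G| + \epsilon}{|P| + |G| + \epsilon}, \qquad \mathcal{L}_{\mathrm{Dice}} = 1 - \mathrm{Dice}(P, G)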

from torch import Tensor

def multiclass_dice_coeff(input: Tensor, target: Tensor, reduce_batch_first: bool = False, epsilon: float = 1e-6):
    # Average of Dice coefficient for all classes
    return dice_coeff(input.flatten(0, 1), target.flatten(0, 1), reduce_batch_first, epsilon)


def dice_loss(input: Tensor, target: Tensor, multiclass: bool = False):
    # Dice loss (objective to minimize) between 0 and 1
    fn = multiclass_dice_coeff if multiclass else dice_coeff

    # Dice = 2 * |prediction ∩ target| / (|prediction| + |target|); the loss returned is 1 - Dice
    return 1 - fn(input, target, reduce_batch_first=True)
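
`dice_coeff` itself is defined alongside these functions in the repository; a minimal sketch of the per-mask computation, assuming input and target have the same shape, could look like this:

import torch
from torch import Tensor

def dice_coeff(input: Tensor, target: Tensor, reduce_batch_first: bool = False, epsilon: float = 1e-6):
    # 2 * |P ∩ G| / (|P| + |G|), summed over the spatial dims and averaged over the rest
    assert input.size() == target.size()
    sum_dim = (-1, -2) if input.dim() == 2 or not reduce_batch_first else (-1, -2, -3)

    inter = 2 * (input * target).sum(dim=sum_dim)
    sets_sum = input.sum(dim=sum_dim) + target.sum(dim=sum_dim)
    sets_sum = torch.where(sets_sum == 0, inter, sets_sum)  # avoid 0/0 when both masks are empty

    dice = (inter + epsilon) / (sets_sum + epsilon)
    return dice.mean()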
