Medical Image Processing (Part 1): Liver Segmentation of MRI with U-Net on the CHAOS Dataset


```python
names = os.listdir(path_2)
for i in range(len(names)):
    dicom_path = os.path.join(path_2, names[i])
    png_name = os.path.splitext(names[i])[0]
    dst_path = os.path.join('./data/train/Data_8bit', png_name + '.png')
    dicom_2png(dicom_path, dst_path, 256, 256)
```

After conversion the images can be inspected at a glance, with no need to open them in MicroDicom.  
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200806144947553.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzM0MDAzODc2,size_16,color_FFFFFF,t_70)
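The `dicom_2png` helper called in the loop above is not shown in the post. A minimal sketch of what it might look like, assuming `pydicom` for reading and OpenCV for resizing and writing; the windowing here is a simple min-max rescale, not necessarily the author's exact implementation:

```python
import numpy as np

def window_to_8bit(pixels, lo=None, hi=None):
    """Linearly rescale a pixel array into the 0-255 uint8 range.
    If lo/hi are omitted, the array's own min/max are used."""
    pixels = pixels.astype(np.float64)
    lo = pixels.min() if lo is None else lo
    hi = pixels.max() if hi is None else hi
    if hi <= lo:
        return np.zeros(pixels.shape, dtype=np.uint8)
    scaled = (np.clip(pixels, lo, hi) - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)

def dicom_2png(dicom_path, dst_path, width, height):
    """Read one DICOM slice, rescale to 8-bit, resize, and save as PNG."""
    import pydicom  # assumed dependencies, imported lazily
    import cv2
    ds = pydicom.dcmread(dicom_path)
    img = window_to_8bit(ds.pixel_array)
    img = cv2.resize(img, (width, height))
    cv2.imwrite(dst_path, img)
```

A fixed window (explicit `lo`/`hi`) would give more consistent contrast across slices than per-slice min-max scaling.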


## Data Augmentation


I use the Augmentor package.



Import the augmentation package:

```python
import Augmentor
```

Set the path of the source images and the path of the mask (ground-truth) files:

```python
p = Augmentor.Pipeline("./data/train/Data")
p.ground_truth("./data/train/Ground")
```

Rotation: applied with probability 0.8, with a maximum of 10 degrees to the left or right:

```python
p.rotate(probability=0.8, max_left_rotation=10, max_right_rotation=10)
```

Horizontal flip: applied with probability 0.5:

```python
p.flip_left_right(probability=0.5)
```

Random zoom: applied with probability 0.3, cropping an area 0.85 times the original image:

```python
p.zoom_random(probability=0.3, percentage_area=0.85)
```

Number of augmented samples to generate:

```python
p.sample(400)
```


Of course, the augmented images can also be renamed sequentially by index:



```python
import os

Data_path = "./data/train/Data_aug"
Ground_path = "./data/train/Ground_aug"

# sort first: os.listdir gives no ordering guarantee, and the image
# and mask folders must end up with matching numbering
data_names = sorted(os.listdir(Data_path))
ground_names = sorted(os.listdir(Ground_path))
for i in range(len(data_names)):
    used_name = os.path.join(Data_path, data_names[i])
    new_name = os.path.join(Data_path, "Aug_No_%d.png" % i)
    os.rename(used_name, new_name)

for i in range(len(ground_names)):
    used_name = os.path.join(Ground_path, ground_names[i])
    new_name = os.path.join(Ground_path, "Aug_No_%d.png" % i)
    os.rename(used_name, new_name)
```
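Because `os.listdir` gives no ordering guarantee, it is safer to fix the order before renaming so that the image and mask folders receive identical numbering. A small helper (hypothetical, not part of the original post) that builds the old-to-new name mapping deterministically:

```python
import os

def build_rename_map(names, prefix="Aug_No_", ext=".png"):
    """Map each existing filename to a sequential new name.
    Sorting first makes the numbering deterministic, so paired
    image/mask folders with identical filenames stay aligned."""
    return {old: "%s%d%s" % (prefix, i, ext)
            for i, old in enumerate(sorted(names))}

# applying it to a folder would look like:
# for old, new in build_rename_map(os.listdir(Data_path)).items():
#     os.rename(os.path.join(Data_path, old), os.path.join(Data_path, new))
```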


For building and training the network I used Python 3.7 + PyTorch 1.4.0.


## Building the U-Net


It is the classic architecture, except that I experimented with adding a few Dropout layers.



```python
"""
@ filename: unet.py
"""
import torch
from torch import nn

class DoubleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(DoubleConv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, input):
        return self.conv(input)

class Unet(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(Unet, self).__init__()

        self.conv1 = DoubleConv(in_ch, 64)
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = DoubleConv(64, 128)
        self.pool2 = nn.MaxPool2d(2)
        self.conv3 = DoubleConv(128, 256)
        self.pool3 = nn.MaxPool2d(2)
        self.conv4 = DoubleConv(256, 512)
        self.pool4 = nn.MaxPool2d(2)
        self.conv5 = DoubleConv(512, 1024)
        self.up6 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.conv6 = DoubleConv(1024, 512)
        self.up7 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.conv7 = DoubleConv(512, 256)
        self.up8 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.conv8 = DoubleConv(256, 128)
        self.up9 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.conv9 = DoubleConv(128, 64)
        self.conv10 = nn.Conv2d(64, out_ch, 1)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        c1 = self.conv1(x)
        p1 = self.pool1(c1)
        p1 = self.dropout(p1)
        c2 = self.conv2(p1)
        p2 = self.pool2(c2)
        p2 = self.dropout(p2)
        c3 = self.conv3(p2)
        p3 = self.pool3(c3)
        p3 = self.dropout(p3)
        c4 = self.conv4(p3)
        p4 = self.pool4(c4)
        p4 = self.dropout(p4)
        c5 = self.conv5(p4)
        up_6 = self.up6(c5)
        merge6 = torch.cat([up_6, c4], dim=1)
        merge6 = self.dropout(merge6)
        c6 = self.conv6(merge6)
        up_7 = self.up7(c6)
        merge7 = torch.cat([up_7, c3], dim=1)
        merge7 = self.dropout(merge7)
        c7 = self.conv7(merge7)
        up_8 = self.up8(c7)
        merge8 = torch.cat([up_8, c2], dim=1)
        merge8 = self.dropout(merge8)
        c8 = self.conv8(merge8)
        up_9 = self.up9(c8)
        merge9 = torch.cat([up_9, c1], dim=1)
        merge9 = self.dropout(merge9)
        c9 = self.conv9(merge9)
        c10 = self.conv10(c9)
        # out = nn.Sigmoid()(c10)  # training uses BCEWithLogitsLoss, so raw logits are returned
        return c10
```

## Custom Dataset


The `make_dataset` function collects the path pairs of the source images and the segmentation masks. The `LiverDataset` class inherits from PyTorch's `Dataset`; it reads the files at those paths with the PIL Image library and applies the transforms to turn them into normalized tensors.



```python
"""
@ filename: dataset.py
@ author: Peter Xiao
@ Date: 2020/5/1
@ Brief: custom liver dataset
"""
from torch.utils.data import Dataset
import PIL.Image as Image
import os

def make_dataset(root):
    # root = "./data/train"
    imgs = []
    ori_path = os.path.join(root, "Data")
    ground_path = os.path.join(root, "Ground")
    names = os.listdir(ori_path)
    n = len(names)
    for i in range(n):
        img = os.path.join(ori_path, names[i])
        mask = os.path.join(ground_path, names[i])
        imgs.append((img, mask))
    return imgs

class LiverDataset(Dataset):
    def __init__(self, root, transform=None, target_transform=None):
        imgs = make_dataset(root)
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform

    def __getitem__(self, index):
        x_path, y_path = self.imgs[index]
        img_x = Image.open(x_path).convert('L')
        img_y = Image.open(y_path).convert('L')
        if self.transform is not None:
            img_x = self.transform(img_x)
        if self.target_transform is not None:
            img_y = self.target_transform(img_y)
        return img_x, img_y

    def __len__(self):
        return len(self.imgs)
```

## Main.py


The main file has three jobs: training, prediction (including saving visualization images), and computing the Dice coefficient. The entry point uses the argparse module as a command-line interface, which you can adapt as needed.  
 One caveat: I trained on a GTX 1650 with 4 GB of memory. A batch size of 4 just fits; anything larger runs out of GPU memory and training fails. On the lab's 2080 Ti, a batch size of 16 used 9.1 GB, so you can scale the batch size to your own GPU from that ratio.



```python
"""
@ filename: main.py
@ author: Peter Xiao
@ date: 2020/5/1
@ brief: MR liver segmentation: training, testing, and Dice computation
"""
import torch
import argparse
import cv2
import os
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from torch import nn, optim
from torchvision.transforms import transforms
from unet import Unet
from denseunet import DenseUNet_65, DenseUNet_167
from dataset import LiverDataset
from tools.common_tools import transform_invert

val_interval = 1

# use CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x_transforms = transforms.Compose([
    transforms.ToTensor(),
])

# the mask only needs to be converted to a tensor
y_transforms = transforms.ToTensor()

train_curve = list()
valid_curve = list()

def train_model(model, criterion, optimizer, dataload, num_epochs=80):
    model_path = "./model/Aug/weights_20.pth"
    if os.path.exists(model_path):
        model.load_state_dict(torch.load(model_path, map_location=device))
        start_epoch = 20
        print('Checkpoint loaded!')
    else:
        start_epoch = 0
        print('No saved model found, training from scratch!')

    for epoch in range(start_epoch + 1, num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs))
        print('-' * 10)
        dt_size = len(dataload.dataset)
        epoch_loss = 0
        step = 0
        for x, y in dataload:
            step += 1
            inputs = x.to(device)
            labels = y.to(device)
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
            train_curve.append(loss.item())
            print("%d/%d,train_loss:%0.3f" % (step, (dt_size - 1) // dataload.batch_size + 1, loss.item()))
        print("epoch %d loss:%0.3f" % (epoch, epoch_loss / step))
        if (epoch + 1) % 20 == 0:
            torch.save(model.state_dict(), './model/Aug/weights_%d.pth' % (epoch + 1))

        # Validate the model
        valid_dataset = LiverDataset("data/val", transform=x_transforms, target_transform=y_transforms)
        valid_loader = DataLoader(valid_dataset, batch_size=4, shuffle=True)
        if (epoch + 2) % val_interval == 0:
            loss_val = 0.
            model.eval()
            with torch.no_grad():
                step_val = 0
                for x, y in valid_loader:
                    step_val += 1
                    inputs = x.to(device)
                    labels = y.to(device)
                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                    loss_val += loss.item()

                valid_curve.append(loss_val)
                print("epoch %d valid_loss:%0.3f" % (epoch, loss_val / step_val))

    train_x = range(len(train_curve))
    train_y = train_curve

    train_iters = len(dataload)
    # valid_curve stores one loss per epoch, so map its points onto the iteration axis
    valid_x = np.arange(1, len(valid_curve) + 1) * train_iters * val_interval
    valid_y = valid_curve

    plt.plot(train_x, train_y, label='Train')
    plt.plot(valid_x, valid_y, label='Valid')

    plt.legend(loc='upper right')
    plt.ylabel('loss value')
    plt.xlabel('Iteration')
    plt.show()
    return model

# train the model
def train(args):
    model = Unet(1, 1).to(device)
    # model = DenseUNet_65(1, 1).to(device)
    batch_size = args.batch_size
    criterion = nn.BCEWithLogitsLoss()
    optimizer = optim.Adam(model.parameters())
    liver_dataset = LiverDataset("./data/train", transform=x_transforms, target_transform=y_transforms)
    dataloaders = DataLoader(liver_dataset, batch_size=batch_size, shuffle=True, num_workers=4)
    train_model(model, criterion, optimizer, dataloaders)

# visualize the model's predictions
def test(args):
    model = Unet(1, 1)
    model.load_state_dict(torch.load(args.ckpt, map_location='cuda'))
    liver_dataset = LiverDataset("data/val", transform=x_transforms, target_transform=y_transforms)
    dataloaders = DataLoader(liver_dataset, batch_size=1)

    save_root = "E:\\MyDocuments\\TorchLearing\\u_net_liver_chaos_8bit\\data\\predict\\test"

    model.eval()
    plt.ion()
    index = 0
    with torch.no_grad():
        for x, ground in dataloaders:
            y = model(x)
            x = torch.squeeze(x)
            x = x.unsqueeze(0)
            ground = torch.squeeze(ground)
            ground = ground.unsqueeze(0)
            img_ground = transform_invert(ground, y_transforms)
            img_x = transform_invert(x, x_transforms)
            img_y = torch.squeeze(y).numpy()
            # cv2.imshow('img', img_y)
            src_path = os.path.join(save_root, "predict_%d_s.png" % index)
            save_path = os.path.join(save_root, "predict_%d_o.png" % index)
            ground_path = os.path.join(save_root, "predict_%d_g.png" % index)
            img_ground.save(ground_path)
            img_x.save(src_path)
            cv2.imwrite(save_path, img_y * 255)
            index = index + 1
            # plt.imshow(img_y)
            # plt.pause(0.5)
        # plt.show()

# compute the Dice coefficient
def dice_calc(args):
    root = r"E:\MyDocuments\TorchLearing\u_net_liver_chaos_8bit\data\predict\aug+drop_8bit\epoch80"
    nums = len(os.listdir(root)) // 3
    dice = list()
    dice_mean = 0
    for i in range(nums):
        ground_path = os.path.join(root, "predict_%d_g.png" % i)
        predict_path = os.path.join(root, "predict_%d_o.png" % i)
        img_ground = cv2.imread(ground_path)
        img_predict = cv2.imread(predict_path)
        intersec = 0
        x = 0
        y = 0
        for w in range(256):
            for h in range(256):
                intersec += img_ground.item(w, h, 1) * img_predict.item(w, h, 1) / (255 * 255)
                x += img_ground.item(w, h, 1) / 255
                y += img_predict.item(w, h, 1) / 255
        if x + y == 0:
            current_dice = 1
        else:
            current_dice = round(2 * intersec / (x + y), 3)
        dice_mean += current_dice
        dice.append(current_dice)
    dice_mean /= len(dice)
    print(dice)
    print(round(dice_mean, 3))

if __name__ == '__main__':
    # parse command-line arguments
    parse = argparse.ArgumentParser()
    parse.add_argument("--action", type=str, help="train, test or dice", default="test")
    parse.add_argument("--batch_size", type=int, default=4)
    parse.add_argument("--ckpt", type=str, help="the path of model weight file", default="./model/Aug/weights_80.pth")
    # parse.add_argument("--ckpt", type=str, help="the path of model weight file")
    args = parse.parse_args()

    if args.action == "train":
        train(args)
    elif args.action == "test":
        test(args)
    elif args.action == "dice":
        dice_calc(args)
```
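The per-pixel double loop in `dice_calc` is slow and hard-codes a 256x256 image size. As a side note, the same Dice coefficient can be computed in a few vectorized NumPy lines; this is a sketch with a binarization threshold of my own choosing, not taken from the original code:

```python
import numpy as np

def dice_coefficient(ground, predict, threshold=127):
    """Dice = 2*|A and B| / (|A| + |B|) on binarized masks."""
    g = np.asarray(ground) > threshold
    p = np.asarray(predict) > threshold
    total = g.sum() + p.sum()
    if total == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(g, p).sum() / total
```

Binarizing first also avoids treating anti-aliased gray pixels as fractional liver area, which the per-pixel division by 255 in the loop version implicitly does.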

## Results


Training is fast: on a GTX 1650 with a batch size of 4, 20 epochs finish in under 20 minutes. The results after 20 epochs are shown below; each row of three images is, from left to right, the ground truth, the network's prediction, and the original image. They look quite decent.  
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200806150843179.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzM0MDAzODc2,size_16,color_FFFFFF,t_70)  
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200806151243974.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzM0MDAzODc2,size_16,color_FFFFFF,t_70)


I also trained on the original 16-bit data and found the Dice score was not very good; I am not sure why. In addition, I ran comparison experiments on training duration, with and without data augmentation, and with and without dropout; the Dice box plots are as follows:  
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200806151319542.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzM0MDAzODc2,size_16,color_FFFFFF,t_70)  
 The best Dice score is 0.89, still some distance from the 91% best 2D U-Net result reported in the CHAOS literature, but considering I only used 16 cases and left the U-Net essentially unmodified, this result is reasonable.

