How to Build a U-Net 3D Network

The U-Net model evolved from the convolutional neural network. A typical CNN focuses on image classification: the input is an image and the output is a label. In medicine and other fields, however, both the input and the output need to be images.
U-Net addresses exactly this kind of problem: it can localize and distinguish boundaries, and its output has the same size as its input.

Model overview

from tensorflow.keras.layers import (Input, Conv3D, Conv3DTranspose,
                                     MaxPooling3D, Dropout, concatenate)

# Downsampling (contracting) path
def build_model(input_layer, start_neurons):
    conv1 = Conv3D(start_neurons * 1, (3, 3, 3), activation='relu', padding="same")(input_layer)
    conv1 = Conv3D(start_neurons * 1, (3, 3, 3), activation='relu', padding="same")(conv1)
    pool1 = MaxPooling3D((2, 2, 2))(conv1)
    pool1 = Dropout(0.25)(pool1)

    conv2 = Conv3D(start_neurons * 2, (3, 3, 3), activation='relu', padding="same")(pool1)
    conv2 = Conv3D(start_neurons * 2, (3, 3, 3), activation='relu', padding="same")(conv2)
    pool2 = MaxPooling3D((2, 2, 2))(conv2)
    pool2 = Dropout(0.5)(pool2)

    conv3 = Conv3D(start_neurons * 4, (3, 3, 3), activation='relu', padding="same")(pool2)
    conv3 = Conv3D(start_neurons * 4, (3, 3, 3), activation='relu', padding="same")(conv3)
    pool3 = MaxPooling3D((2, 2, 2))(conv3)
    pool3 = Dropout(0.5)(pool3)

    conv4 = Conv3D(start_neurons * 8, (3, 3, 3), activation='relu', padding="same")(pool3)
    conv4 = Conv3D(start_neurons * 8, (3, 3, 3), activation='relu', padding="same")(conv4)
    pool4 = MaxPooling3D((2, 2, 2))(conv4)
    pool4 = Dropout(0.5)(pool4)

    # Middle (bottleneck) part
    convm = Conv3D(start_neurons * 16, (3, 3, 3), activation='relu', padding="same")(pool4)
    convm = Conv3D(start_neurons * 16, (3, 3, 3), activation='relu', padding="same")(convm)

    # Upsampling (expanding) path
    deconv4 = Conv3DTranspose(start_neurons * 8, (3, 3, 3), strides=(2, 2, 2), padding="same")(convm)
    uconv4 = concatenate([deconv4, conv4])
    uconv4 = Dropout(0.5)(uconv4)
    uconv4 = Conv3D(start_neurons * 8, (3, 3, 3), activation='relu', padding="same")(uconv4)
    uconv4 = Conv3D(start_neurons * 8, (3, 3, 3), activation='relu', padding="same")(uconv4)

    deconv3 = Conv3DTranspose(start_neurons * 4, (3, 3, 3), strides=(2, 2, 2), padding="same")(uconv4)
    uconv3 = concatenate([deconv3, conv3])
    uconv3 = Dropout(0.5)(uconv3)
    uconv3 = Conv3D(start_neurons * 4, (3, 3, 3), activation='relu', padding="same")(uconv3)
    uconv3 = Conv3D(start_neurons * 4, (3, 3, 3), activation='relu', padding="same")(uconv3)

    deconv2 = Conv3DTranspose(start_neurons * 2, (3, 3, 3), strides=(2, 2, 2), padding="same")(uconv3)
    uconv2 = concatenate([deconv2, conv2])
    uconv2 = Dropout(0.5)(uconv2)
    uconv2 = Conv3D(start_neurons * 2, (3, 3, 3), activation='relu', padding="same")(uconv2)
    uconv2 = Conv3D(start_neurons * 2, (3, 3, 3), activation='relu', padding="same")(uconv2)

    deconv1 = Conv3DTranspose(start_neurons * 1, (3, 3, 3), strides=(2, 2, 2), padding="same")(uconv2)
    uconv1 = concatenate([deconv1, conv1])
    uconv1 = Dropout(0.5)(uconv1)
    uconv1 = Conv3D(start_neurons * 1, (3, 3, 3), activation='relu', padding="same")(uconv1)
    uconv1 = Conv3D(start_neurons * 1, (3, 3, 3), activation='relu', padding="same")(uconv1)

    # Single-channel sigmoid output: one probability per voxel
    output_layer = Conv3D(1, (1, 1, 1), activation='sigmoid', padding="same")(uconv1)

    return output_layer

# image_size_target is the side length of the cubic input volume
input_layer = Input((image_size_target, image_size_target, image_size_target, 1))
output_layer = build_model(input_layer, 16)
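With `start_neurons = 16`, the filter counts double at every level of the contracting path. A quick sanity check of the widths the function above uses (`encoder_widths` is just an illustrative helper, not part of the model):

```python
def encoder_widths(start_neurons, depth=5):
    """Number of Conv3D filters at each level of the contracting path;
    the last entry is the bottleneck width."""
    return [start_neurons * 2 ** i for i in range(depth)]

print(encoder_widths(16))  # [16, 32, 64, 128, 256]
```

The original U-Net paper starts at 64 filters, which gives the familiar 64 → 128 → 256 → 512 → 1024 progression; `start_neurons = 16` is simply a lighter-weight version of the same scheme.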

Line-by-line analysis with the 2D network

The 2D version is analyzed here only so that the figures from the original article can be reused; building the 3D and 2D networks is essentially the same, and comparing the two also helps learning.

1. Downsampling: convolution

The convolution path:

conv_layer1 -> conv_layer2 -> max_pooling -> dropout(optional)

The code for this part of the structure is:

conv1 = Conv2D(start_neurons * 1, (3, 3), activation="relu", padding="same")(input_layer)
conv1 = Conv2D(start_neurons * 1, (3, 3), activation="relu", padding="same")(conv1)
pool1 = MaxPooling2D((2, 2))(conv1)
pool1 = Dropout(0.25)(pool1)

The corresponding structure in the figure:
(figure from the original article omitted)
Note that each step consists of two convolution layers, and the number of channels goes from 1 to 64, since convolution increases the depth of the feature map. The red downward arrow is the max-pooling operation, which halves the spatial size. (In the original paper the size also shrinks from 572x572 to 568x568 before pooling, because its convolutions are unpadded; this code uses padding="same", so convolution leaves the size unchanged.)
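The size bookkeeping in that note can be checked with the standard convolution output-size formula (a small sketch; `conv_out` and `pool_out` are illustrative helpers, not Keras API):

```python
import math

def conv_out(n, k=3, stride=1, padding="same"):
    """Spatial output size of a convolution along one dimension."""
    if padding == "same":
        return math.ceil(n / stride)
    # "valid" (unpadded, as in the original U-Net paper)
    return (n - k) // stride + 1

def pool_out(n, k=2, stride=2):
    """Spatial output size of max pooling along one dimension."""
    return (n - k) // stride + 1

# Original paper (unpadded): two 3x3 convs shrink 572 -> 570 -> 568
n = conv_out(conv_out(572, padding="valid"), padding="valid")
print(n)            # 568
print(pool_out(n))  # 284 after 2x2 max pooling
# With padding="same" (this article's code) convolution keeps the size:
print(conv_out(572, padding="same"))  # 572
```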
This process is repeated three more times, one convolution block per level of depth:
(figure omitted)
The code:

conv2 = Conv2D(start_neurons * 2, (3, 3), activation="relu", padding="same")(pool1)
conv2 = Conv2D(start_neurons * 2, (3, 3), activation="relu", padding="same")(conv2)
pool2 = MaxPooling2D((2, 2))(conv2)
pool2 = Dropout(0.5)(pool2)


conv3 = Conv2D(start_neurons * 4, (3, 3), activation="relu", padding="same")(pool2)
conv3 = Conv2D(start_neurons * 4, (3, 3), activation="relu", padding="same")(conv3)
pool3 = MaxPooling2D((2, 2))(conv3)
pool3 = Dropout(0.5)(pool3)


conv4 = Conv2D(start_neurons * 8, (3, 3), activation="relu", padding="same")(pool3)
conv4 = Conv2D(start_neurons * 8, (3, 3), activation="relu", padding="same")(conv4)
pool4 = MaxPooling2D((2, 2))(conv4)
pool4 = Dropout(0.5)(pool4)

We have now reached the bottom of the U-Net:

convm = Conv2D(start_neurons * 16, (3, 3), activation="relu", padding="same")(pool4)
convm = Conv2D(start_neurons * 16, (3, 3), activation="relu", padding="same")(convm)

At this point the feature map is 28x28x1024 (the spatial size follows the original paper's unpadded arithmetic). Downsampling is finished, and upsampling begins.
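The 28x28 figure comes straight from the original paper's unpadded arithmetic: each level applies two unpadded 3x3 convolutions (losing 4 pixels per side pair) and then halves the size. Tracing it from the paper's 572x572 input (a sketch, not part of the model code):

```python
def encoder_trace(n=572):
    """Spatial size after each stage of the original (unpadded) U-Net encoder."""
    sizes = []
    for _ in range(4):    # four down-sampling levels
        n = n - 4         # two unpadded 3x3 convolutions: -2 each
        sizes.append(n)
        n = n // 2        # 2x2 max pooling halves the size
    sizes.append(n - 4)   # the two bottleneck convolutions
    return sizes

print(encoder_trace(572))  # [568, 280, 136, 64, 28]
```

The final 28 matches the 28x28x1024 bottleneck quoted above. With `padding="same"`, as in this article's code, the sizes would instead stay at exact powers-of-two fractions of the input.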

2. Upsampling: transposed convolution (deconvolution)

The deconvolution path:

conv_2d_transpose -> concatenate -> conv_layer1 -> conv_layer2

The corresponding structure in the figure:
(figure omitted)
The corresponding code:

deconv4 = Conv2DTranspose(start_neurons * 8, (3, 3), strides=(2, 2), padding="same")(convm)
uconv4 = concatenate([deconv4, conv4])
uconv4 = Dropout(0.5)(uconv4)
uconv4 = Conv2D(start_neurons * 8, (3, 3), activation="relu", padding="same")(uconv4)
uconv4 = Conv2D(start_neurons * 8, (3, 3), activation="relu", padding="same")(uconv4)

Transposed convolution (deconvolution) is a technique for enlarging the feature map, i.e., an upsampling technique: essentially the input is padded and then a normal convolution is applied.
After the transposed convolution, the feature map grows from 28x28x1024 to 56x56x512. It is then concatenated, along the channel axis, with the feature map from the corresponding position in the contracting path, producing a 56x56x1024 tensor. Combining with the downsampled feature map in this way is mainly for more precise prediction. The fourth and fifth lines add two more convolution layers.
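The size change described above follows Keras's output-size rule for `Conv2DTranspose` with `padding="same"`: output = input x stride. A quick check (`transpose_out` is an illustrative helper, not Keras API):

```python
def transpose_out(n, stride=2, k=3, padding="same"):
    """Spatial output size of a transposed convolution along one dimension,
    following Keras's Conv2DTranspose shape rule."""
    if padding == "same":
        return n * stride
    return (n - 1) * stride + k  # "valid"

up = transpose_out(28)   # 28 -> 56: spatial size doubles
channels = 512 + 512     # deconv4 (512) concatenated with conv4 (512)
print(up, channels)      # 56 1024
```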
As with downsampling, this process is repeated three times:

deconv3 = Conv2DTranspose(start_neurons * 4, (3, 3), strides=(2, 2), padding="same")(uconv4)
uconv3 = concatenate([deconv3, conv3])
uconv3 = Dropout(0.5)(uconv3)
uconv3 = Conv2D(start_neurons * 4, (3, 3), activation="relu", padding="same")(uconv3)
uconv3 = Conv2D(start_neurons * 4, (3, 3), activation="relu", padding="same")(uconv3)

deconv2 = Conv2DTranspose(start_neurons * 2, (3, 3), strides=(2, 2), padding="same")(uconv3)
uconv2 = concatenate([deconv2, conv2])
uconv2 = Dropout(0.5)(uconv2)
uconv2 = Conv2D(start_neurons * 2, (3, 3), activation="relu", padding="same")(uconv2)
uconv2 = Conv2D(start_neurons * 2, (3, 3), activation="relu", padding="same")(uconv2)

deconv1 = Conv2DTranspose(start_neurons * 1, (3, 3), strides=(2, 2), padding="same")(uconv2)
uconv1 = concatenate([deconv1, conv1])
uconv1 = Dropout(0.5)(uconv1)
uconv1 = Conv2D(start_neurons * 1, (3, 3), activation="relu", padding="same")(uconv1)
uconv1 = Conv2D(start_neurons * 1, (3, 3), activation="relu", padding="same")(uconv1)

We have now reached the top level.
(figure omitted)
The output:

output_layer = Conv2D(1, (1,1), padding="same", activation="sigmoid")(uconv1)
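Because the final 1x1 convolution uses a sigmoid, each output pixel is a foreground probability. A minimal sketch of turning the network's output into a binary segmentation mask (plain Python, just to illustrate this last step; the 0.5 threshold is a common default, not mandated by the model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The 1x1 convolution maps each pixel's feature vector to one logit;
# sigmoid squashes it to a probability, which is thresholded at 0.5
# to obtain the binary mask.
logits = [-2.0, 0.0, 3.5]
probs = [sigmoid(z) for z in logits]
mask = [1 if p >= 0.5 else 0 for p in probs]
print(mask)  # [0, 1, 1]
```

For multi-class segmentation one would instead use as many output channels as classes with a softmax activation.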