ResUNet implementation (TensorFlow 2)

  In this post we will implement ResUNet.
  First, recall the U-Net architecture:
(U-Net architecture figure)
Combining U-Net with ResNet then gives us a new network: ResUNet.

import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, Add, UpSampling2D, Concatenate)

def bn_act(x, act=True):
    'batch normalization layer with an optional activation layer'
    x = BatchNormalization()(x)
    if act:
        x = Activation('relu')(x)
    return x

In the diagrams below, boxes represent data, circles represent operations, and dashed lines represent conditional branches.

bn_act flow: x → BN → (ReLU, only if act=True) → z
def conv_block(x, filters, kernel_size=3, padding='same', strides=1):
    'pre-activated convolution: batch normalization (and optional ReLU) before the convolution'
    conv = bn_act(x)
    conv = Conv2D(filters, kernel_size, padding=padding, strides=strides)(conv)
    return conv
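As a quick sanity check (my addition, not part of the original post), the pre-activation ordering used by conv_block — BN and ReLU *before* the convolution, as in ResNet v2 — can be exercised on a dummy tensor:

```python
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization, Activation, Conv2D

# Dummy input: batch of 1, 8x8 spatial, 3 channels.
x = tf.zeros((1, 8, 8, 3))

# Same ordering as conv_block: BN -> ReLU -> Conv2D.
h = Activation('relu')(BatchNormalization()(x))
h = Conv2D(16, 3, padding='same', strides=1)(h)

print(tuple(h.shape))  # (1, 8, 8, 16): 'same' padding preserves spatial size
```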

conv_block flow: x → bn_act → Conv2D → z

stem flow (diagram follows the code):

def stem(x, filters, kernel_size=3, padding='same', strides=1):
    conv = Conv2D(filters, kernel_size, padding=padding, strides=strides)(x)
    conv = conv_block(conv, filters, kernel_size, padding, strides)
    shortcut = Conv2D(filters, kernel_size=1, padding=padding, strides=strides)(x)
    shortcut = bn_act(shortcut, act=False)
    output = Add()([conv, shortcut])
    return output
x → Conv2D → conv_block → Add → output, with a shortcut branch x → Conv2D(1×1) → bn_act feeding the same Add

residual_block flow (diagram follows the code):

def residual_block(x, filters, kernel_size=3, padding='same', strides=1):
    res = conv_block(x, filters, kernel_size, padding, strides)
    res = conv_block(res, filters, kernel_size, padding, 1)
    shortcut = Conv2D(filters, kernel_size, padding=padding, strides=strides)(x)
    shortcut = bn_act(shortcut, act=False)
    output = Add()([shortcut, res])
    return output
x → conv_block → conv_block → Add → output, with a shortcut branch x → Conv2D → bn_act feeding the same Add
def upsample_concat_block(x, xskip):
    u = UpSampling2D((2,2))(x)
    c = Concatenate()([u, xskip])
    return c
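A note on input sizes (my addition): the encoder halves the spatial dimensions four times, and each upsample_concat_block doubles them back, so img_h and img_w must be divisible by 16 for the skip-connection shapes to line up. A quick pure-Python trace:

```python
def encoder_sizes(h, w, levels=4):
    """Spatial sizes after each of the stride-2 residual blocks."""
    sizes = [(h, w)]
    for _ in range(levels):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

# For the 256x800 input used later in this post:
print(encoder_sizes(256, 800))
# [(256, 800), (128, 400), (64, 200), (32, 100), (16, 50)]
```

Each 2× upsampling in the decoder retraces these sizes in reverse, which is why the concatenations with e4, e3, e2, e1 succeed.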

ResUNet:

def ResUNet(img_h, img_w):
    f = [16, 32, 64, 128, 256]
    inputs = Input((img_h, img_w, 1))
    
    ## Encoder
    e0 = inputs
    e1 = stem(e0, f[0])
    e2 = residual_block(e1, f[1], strides=2)
    e3 = residual_block(e2, f[2], strides=2)
    e4 = residual_block(e3, f[3], strides=2)
    e5 = residual_block(e4, f[4], strides=2)
    
    ## Bridge
    b0 = conv_block(e5, f[4], strides=1)
    b1 = conv_block(b0, f[4], strides=1)
    
    ## Decoder
    u1 = upsample_concat_block(b1, e4)
    d1 = residual_block(u1, f[4])
    
    u2 = upsample_concat_block(d1, e3)
    d2 = residual_block(u2, f[3])
    
    u3 = upsample_concat_block(d2, e2)
    d3 = residual_block(u3, f[2])
    
    u4 = upsample_concat_block(d3, e1)
    d4 = residual_block(u4, f[1])
    
    outputs = tf.keras.layers.Conv2D(4, (1, 1), padding="same", activation="sigmoid")(d4)
    model = tf.keras.models.Model(inputs, outputs)
    return model
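The channel counts after each Concatenate can be read directly off the filter list f (a small check I added; the numbers match the Concatenate rows in the summary below):

```python
f = [16, 32, 64, 128, 256]

# Decoder concatenations: (upsampled decoder channels) + (encoder skip channels).
concat_channels = [
    f[4] + f[3],  # u1: bridge (256) + e4 (128)
    f[4] + f[2],  # u2: d1 (256) + e3 (64)
    f[3] + f[1],  # u3: d2 (128) + e2 (32)
    f[2] + f[0],  # u4: d3 (64) + e1 (16)
]
print(concat_channels)  # [384, 320, 160, 80]
```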

  I won't draw a flow diagram for this one.
Instead, here is the model summary, for a 256×800 single-channel input:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 256, 800, 1) 0                                            
__________________________________________________________________________________________________
conv2d_167 (Conv2D)             (None, 256, 800, 16) 160         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_75 (BatchNo (None, 256, 800, 16) 64          conv2d_167[0][0]                 
__________________________________________________________________________________________________
activation_74 (Activation)      (None, 256, 800, 16) 0           batch_normalization_75[0][0]     
__________________________________________________________________________________________________
conv2d_169 (Conv2D)             (None, 256, 800, 16) 32          input_1[0][0]                    
__________________________________________________________________________________________________
conv2d_168 (Conv2D)             (None, 256, 800, 16) 2320        activation_74[0][0]              
__________________________________________________________________________________________________
batch_normalization_76 (BatchNo (None, 256, 800, 16) 64          conv2d_169[0][0]                 
__________________________________________________________________________________________________
add (Add)                       (None, 256, 800, 16) 0           conv2d_168[0][0]                 
                                                                 batch_normalization_76[0][0]     
__________________________________________________________________________________________________
batch_normalization_77 (BatchNo (None, 256, 800, 16) 64          add[0][0]                        
__________________________________________________________________________________________________
activation_75 (Activation)      (None, 256, 800, 16) 0           batch_normalization_77[0][0]     
__________________________________________________________________________________________________
conv2d_170 (Conv2D)             (None, 128, 400, 32) 4640        activation_75[0][0]              
__________________________________________________________________________________________________
batch_normalization_78 (BatchNo (None, 128, 400, 32) 128         conv2d_170[0][0]                 
__________________________________________________________________________________________________
conv2d_172 (Conv2D)             (None, 128, 400, 32) 4640        add[0][0]                        
__________________________________________________________________________________________________
activation_76 (Activation)      (None, 128, 400, 32) 0           batch_normalization_78[0][0]     
__________________________________________________________________________________________________
batch_normalization_79 (BatchNo (None, 128, 400, 32) 128         conv2d_172[0][0]                 
__________________________________________________________________________________________________
conv2d_171 (Conv2D)             (None, 128, 400, 32) 9248        activation_76[0][0]              
__________________________________________________________________________________________________
add_1 (Add)                     (None, 128, 400, 32) 0           batch_normalization_79[0][0]     
                                                                 conv2d_171[0][0]                 
__________________________________________________________________________________________________
batch_normalization_80 (BatchNo (None, 128, 400, 32) 128         add_1[0][0]                      
__________________________________________________________________________________________________
activation_77 (Activation)      (None, 128, 400, 32) 0           batch_normalization_80[0][0]     
__________________________________________________________________________________________________
conv2d_173 (Conv2D)             (None, 64, 200, 64)  18496       activation_77[0][0]              
__________________________________________________________________________________________________
batch_normalization_81 (BatchNo (None, 64, 200, 64)  256         conv2d_173[0][0]                 
__________________________________________________________________________________________________
conv2d_175 (Conv2D)             (None, 64, 200, 64)  18496       add_1[0][0]                      
__________________________________________________________________________________________________
activation_78 (Activation)      (None, 64, 200, 64)  0           batch_normalization_81[0][0]     
__________________________________________________________________________________________________
batch_normalization_82 (BatchNo (None, 64, 200, 64)  256         conv2d_175[0][0]                 
__________________________________________________________________________________________________
conv2d_174 (Conv2D)             (None, 64, 200, 64)  36928       activation_78[0][0]              
__________________________________________________________________________________________________
add_2 (Add)                     (None, 64, 200, 64)  0           batch_normalization_82[0][0]     
                                                                 conv2d_174[0][0]                 
__________________________________________________________________________________________________
batch_normalization_83 (BatchNo (None, 64, 200, 64)  256         add_2[0][0]                      
__________________________________________________________________________________________________
activation_79 (Activation)      (None, 64, 200, 64)  0           batch_normalization_83[0][0]     
__________________________________________________________________________________________________
conv2d_176 (Conv2D)             (None, 32, 100, 128) 73856       activation_79[0][0]              
__________________________________________________________________________________________________
batch_normalization_84 (BatchNo (None, 32, 100, 128) 512         conv2d_176[0][0]                 
__________________________________________________________________________________________________
conv2d_178 (Conv2D)             (None, 32, 100, 128) 73856       add_2[0][0]                      
__________________________________________________________________________________________________
activation_80 (Activation)      (None, 32, 100, 128) 0           batch_normalization_84[0][0]     
__________________________________________________________________________________________________
batch_normalization_85 (BatchNo (None, 32, 100, 128) 512         conv2d_178[0][0]                 
__________________________________________________________________________________________________
conv2d_177 (Conv2D)             (None, 32, 100, 128) 147584      activation_80[0][0]              
__________________________________________________________________________________________________
add_3 (Add)                     (None, 32, 100, 128) 0           batch_normalization_85[0][0]     
                                                                 conv2d_177[0][0]                 
__________________________________________________________________________________________________
batch_normalization_86 (BatchNo (None, 32, 100, 128) 512         add_3[0][0]                      
__________________________________________________________________________________________________
activation_81 (Activation)      (None, 32, 100, 128) 0           batch_normalization_86[0][0]     
__________________________________________________________________________________________________
conv2d_179 (Conv2D)             (None, 16, 50, 256)  295168      activation_81[0][0]              
__________________________________________________________________________________________________
batch_normalization_87 (BatchNo (None, 16, 50, 256)  1024        conv2d_179[0][0]                 
__________________________________________________________________________________________________
conv2d_181 (Conv2D)             (None, 16, 50, 256)  295168      add_3[0][0]                      
__________________________________________________________________________________________________
activation_82 (Activation)      (None, 16, 50, 256)  0           batch_normalization_87[0][0]     
__________________________________________________________________________________________________
batch_normalization_88 (BatchNo (None, 16, 50, 256)  1024        conv2d_181[0][0]                 
__________________________________________________________________________________________________
conv2d_180 (Conv2D)             (None, 16, 50, 256)  590080      activation_82[0][0]              
__________________________________________________________________________________________________
add_4 (Add)                     (None, 16, 50, 256)  0           batch_normalization_88[0][0]     
                                                                 conv2d_180[0][0]                 
__________________________________________________________________________________________________
batch_normalization_89 (BatchNo (None, 16, 50, 256)  1024        add_4[0][0]                      
__________________________________________________________________________________________________
activation_83 (Activation)      (None, 16, 50, 256)  0           batch_normalization_89[0][0]     
__________________________________________________________________________________________________
conv2d_182 (Conv2D)             (None, 16, 50, 256)  590080      activation_83[0][0]              
__________________________________________________________________________________________________
batch_normalization_90 (BatchNo (None, 16, 50, 256)  1024        conv2d_182[0][0]                 
__________________________________________________________________________________________________
activation_84 (Activation)      (None, 16, 50, 256)  0           batch_normalization_90[0][0]     
__________________________________________________________________________________________________
conv2d_183 (Conv2D)             (None, 16, 50, 256)  590080      activation_84[0][0]              
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D)    (None, 32, 100, 256) 0           conv2d_183[0][0]                 
__________________________________________________________________________________________________
concatenate_36 (Concatenate)    (None, 32, 100, 384) 0           up_sampling2d[0][0]              
                                                                 add_3[0][0]                      
__________________________________________________________________________________________________
batch_normalization_91 (BatchNo (None, 32, 100, 384) 1536        concatenate_36[0][0]             
__________________________________________________________________________________________________
activation_85 (Activation)      (None, 32, 100, 384) 0           batch_normalization_91[0][0]     
__________________________________________________________________________________________________
conv2d_184 (Conv2D)             (None, 32, 100, 256) 884992      activation_85[0][0]              
__________________________________________________________________________________________________
batch_normalization_92 (BatchNo (None, 32, 100, 256) 1024        conv2d_184[0][0]                 
__________________________________________________________________________________________________
conv2d_186 (Conv2D)             (None, 32, 100, 256) 884992      concatenate_36[0][0]             
__________________________________________________________________________________________________
activation_86 (Activation)      (None, 32, 100, 256) 0           batch_normalization_92[0][0]     
__________________________________________________________________________________________________
batch_normalization_93 (BatchNo (None, 32, 100, 256) 1024        conv2d_186[0][0]                 
__________________________________________________________________________________________________
conv2d_185 (Conv2D)             (None, 32, 100, 256) 590080      activation_86[0][0]              
__________________________________________________________________________________________________
add_5 (Add)                     (None, 32, 100, 256) 0           batch_normalization_93[0][0]     
                                                                 conv2d_185[0][0]                 
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D)  (None, 64, 200, 256) 0           add_5[0][0]                      
__________________________________________________________________________________________________
concatenate_37 (Concatenate)    (None, 64, 200, 320) 0           up_sampling2d_1[0][0]            
                                                                 add_2[0][0]                      
__________________________________________________________________________________________________
batch_normalization_94 (BatchNo (None, 64, 200, 320) 1280        concatenate_37[0][0]             
__________________________________________________________________________________________________
activation_87 (Activation)      (None, 64, 200, 320) 0           batch_normalization_94[0][0]     
__________________________________________________________________________________________________
conv2d_187 (Conv2D)             (None, 64, 200, 128) 368768      activation_87[0][0]              
__________________________________________________________________________________________________
batch_normalization_95 (BatchNo (None, 64, 200, 128) 512         conv2d_187[0][0]                 
__________________________________________________________________________________________________
conv2d_189 (Conv2D)             (None, 64, 200, 128) 368768      concatenate_37[0][0]             
__________________________________________________________________________________________________
activation_88 (Activation)      (None, 64, 200, 128) 0           batch_normalization_95[0][0]     
__________________________________________________________________________________________________
batch_normalization_96 (BatchNo (None, 64, 200, 128) 512         conv2d_189[0][0]                 
__________________________________________________________________________________________________
conv2d_188 (Conv2D)             (None, 64, 200, 128) 147584      activation_88[0][0]              
__________________________________________________________________________________________________
add_6 (Add)                     (None, 64, 200, 128) 0           batch_normalization_96[0][0]     
                                                                 conv2d_188[0][0]                 
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D)  (None, 128, 400, 128 0           add_6[0][0]                      
__________________________________________________________________________________________________
concatenate_38 (Concatenate)    (None, 128, 400, 160 0           up_sampling2d_2[0][0]            
                                                                 add_1[0][0]                      
__________________________________________________________________________________________________
batch_normalization_97 (BatchNo (None, 128, 400, 160 640         concatenate_38[0][0]             
__________________________________________________________________________________________________
activation_89 (Activation)      (None, 128, 400, 160 0           batch_normalization_97[0][0]     
__________________________________________________________________________________________________
conv2d_190 (Conv2D)             (None, 128, 400, 64) 92224       activation_89[0][0]              
__________________________________________________________________________________________________
batch_normalization_98 (BatchNo (None, 128, 400, 64) 256         conv2d_190[0][0]                 
__________________________________________________________________________________________________
conv2d_192 (Conv2D)             (None, 128, 400, 64) 92224       concatenate_38[0][0]             
__________________________________________________________________________________________________
activation_90 (Activation)      (None, 128, 400, 64) 0           batch_normalization_98[0][0]     
__________________________________________________________________________________________________
batch_normalization_99 (BatchNo (None, 128, 400, 64) 256         conv2d_192[0][0]                 
__________________________________________________________________________________________________
conv2d_191 (Conv2D)             (None, 128, 400, 64) 36928       activation_90[0][0]              
__________________________________________________________________________________________________
add_7 (Add)                     (None, 128, 400, 64) 0           batch_normalization_99[0][0]     
                                                                 conv2d_191[0][0]                 
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D)  (None, 256, 800, 64) 0           add_7[0][0]                      
__________________________________________________________________________________________________
concatenate_39 (Concatenate)    (None, 256, 800, 80) 0           up_sampling2d_3[0][0]            
                                                                 add[0][0]                        
__________________________________________________________________________________________________
batch_normalization_100 (BatchN (None, 256, 800, 80) 320         concatenate_39[0][0]             
__________________________________________________________________________________________________
activation_91 (Activation)      (None, 256, 800, 80) 0           batch_normalization_100[0][0]    
__________________________________________________________________________________________________
conv2d_193 (Conv2D)             (None, 256, 800, 32) 23072       activation_91[0][0]              
__________________________________________________________________________________________________
batch_normalization_101 (BatchN (None, 256, 800, 32) 128         conv2d_193[0][0]                 
__________________________________________________________________________________________________
conv2d_195 (Conv2D)             (None, 256, 800, 32) 23072       concatenate_39[0][0]             
__________________________________________________________________________________________________
activation_92 (Activation)      (None, 256, 800, 32) 0           batch_normalization_101[0][0]    
__________________________________________________________________________________________________
batch_normalization_102 (BatchN (None, 256, 800, 32) 128         conv2d_195[0][0]                 
__________________________________________________________________________________________________
conv2d_194 (Conv2D)             (None, 256, 800, 32) 9248        activation_92[0][0]              
__________________________________________________________________________________________________
add_8 (Add)                     (None, 256, 800, 32) 0           batch_normalization_102[0][0]    
                                                                 conv2d_194[0][0]                 
__________________________________________________________________________________________________
conv2d_196 (Conv2D)             (None, 256, 800, 4)  132         add_8[0][0]                      
==================================================================================================
Total params: 6,287,508
Trainable params: 6,280,212
Non-trainable params: 7,296
__________________________________________________________________________________________________
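The parameter counts in the summary can be reproduced by hand (a sanity check I added): a Conv2D layer has k·k·c_in weights per filter plus one bias per filter, and a BatchNormalization layer has 4 parameters per channel (gamma and beta are trainable; the moving mean and variance are not, which is where the 7,296 non-trainable parameters come from):

```python
def conv2d_params(k, c_in, filters):
    # k*k*c_in weights per filter, plus one bias per filter
    return k * k * c_in * filters + filters

def bn_params(channels):
    # gamma + beta (trainable) and moving mean + variance (non-trainable)
    return 4 * channels

print(conv2d_params(3, 1, 16))   # 160  -> conv2d_167 (stem, 1-channel input)
print(conv2d_params(3, 16, 16))  # 2320 -> conv2d_168
print(conv2d_params(1, 1, 16))   # 32   -> conv2d_169 (1x1 stem shortcut)
print(conv2d_params(1, 32, 4))   # 132  -> conv2d_196 (final 1x1, 4 outputs)
print(bn_params(16))             # 64   -> batch_normalization_75
```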
