Implementing DenseNet in Keras

This post shows how to implement the DenseNet architecture in Keras, with detailed explanations of conv_block, transition_block and dense_block, and how to assemble the full DenseNet model.


April 11, 2018 · Manfestain

Copyright notice: This is an original post by the author, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/Beans___Lee/article/details/83964958

Adapted from https://github.com/titu1994/DenseNet/blob/master/densenet.py


First, a figure to make the network structure easier to follow. The recommended number of dense blocks is usually 3. Between two dense blocks sits a transition layer, and each dense block uses dense connectivity internally.

 

[Figure: overall DenseNet structure, with dense blocks connected by transition layers]


Conv_block:

The convolution operation. According to the paper, this is a composite function of three consecutive operations: BatchNormalization, ReLU, and a 3x3 convolution.

 
# Imports shared by all snippets in this post (standalone Keras 2.x)
from keras import backend as K
from keras.layers import (Activation, AveragePooling2D, BatchNormalization, Conv2D, Dense,
                          Dropout, GlobalAveragePooling2D, Input, MaxPooling2D, concatenate)
from keras.models import Model
from keras.regularizers import l2


def conv_block(ip, nb_filter, bottleneck=False, dropout_rate=None, weight_decay=1e-4):
    '''Apply BatchNorm, ReLU, 3x3 Conv2D, optional bottleneck block and dropout
    Args:
        ip: input keras tensor
        nb_filter: number of filters
        bottleneck: add bottleneck block
        dropout_rate: dropout rate
        weight_decay: weight decay factor
    Returns: keras tensor with batch_norm, relu and convolution2d added (optional bottleneck)
    '''
    concat_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = BatchNormalization(axis=concat_axis, epsilon=1.1e-5)(ip)
    x = Activation('relu')(x)

    if bottleneck:
        # the 1x1 bottleneck convolution produces 4 * growth_rate feature maps (4k in the paper)
        inter_channel = nb_filter * 4
        x = Conv2D(inter_channel, (1, 1), kernel_initializer='he_normal', padding='same', use_bias=False,
                   kernel_regularizer=l2(weight_decay))(x)
        x = BatchNormalization(axis=concat_axis, epsilon=1.1e-5)(x)
        x = Activation('relu')(x)

    x = Conv2D(nb_filter, (3, 3), kernel_initializer='he_normal', padding='same', use_bias=False)(x)

    if dropout_rate:
        x = Dropout(dropout_rate)(x)

    return x

Here concat_axis is the feature (channel) axis, since both concatenation and BatchNormalization operate along it. bottleneck controls whether to use a bottleneck layer, i.e. a 1x1 convolutional layer that compresses the number of channels of the feature maps.
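
As a quick sanity check, here is a minimal sketch (assuming the imports above and the default 'channels_last' data format): whatever the input depth, conv_block always emits nb_filter channels.

ip = Input(shape=(32, 32, 64))                       # 64-channel input
out = conv_block(ip, nb_filter=12, bottleneck=True)  # bottleneck first maps to 4 * 12 = 48 channels
print(K.int_shape(out))                              # (None, 32, 32, 12)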


Transition_block:

The transition layer connects two dense_blocks; no transition layer is needed after the last dense_block. According to the paper, the transition layer consists of four parts: BatchNormalization, ReLU, a 1x1 convolution, and 2x2 average pooling.

 
def transition_block(ip, nb_filter, compression=1.0, weight_decay=1e-4):
    '''Apply BatchNorm, ReLU, 1x1 Conv2D with optional compression, and AveragePooling2D
    Args:
        ip: keras tensor
        nb_filter: number of filters
        compression: calculated as 1 - reduction. Reduces the number of feature maps in the transition block
        weight_decay: weight decay factor
    Returns:
        keras tensor, after applying batch_norm, relu-conv, avgpool
    '''
    concat_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = BatchNormalization(axis=concat_axis, epsilon=1.1e-5)(ip)
    x = Activation('relu')(x)
    # the 1x1 convolution scales the channel count by the compression factor
    x = Conv2D(int(nb_filter * compression), (1, 1), kernel_initializer='he_normal', padding='same', use_bias=False,
               kernel_regularizer=l2(weight_decay))(x)
    # 2x2 average pooling halves the spatial resolution
    x = AveragePooling2D((2, 2), strides=(2, 2))(x)

    return x

The Conv2D here implements the 1x1 convolution and applies the compression factor (the compression rate from the paper) to adjust the channel count.
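
Another minimal shape sketch (same assumptions as above): with compression=0.5, the transition layer halves both the channel count and the spatial resolution.

ip = Input(shape=(32, 32, 64))
out = transition_block(ip, nb_filter=64, compression=0.5)
print(K.int_shape(out))   # (None, 16, 16, 32)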


Dense_block:

A loop implements the dense connectivity inside the dense_block.

 
def dense_block(x, nb_layers, nb_filter, growth_rate, bottleneck=False, dropout_rate=None, weight_decay=1e-4,
                grow_nb_filters=True, return_concat_list=False):
    '''Build a dense_block where the output of each conv_block is fed to subsequent ones
    Args:
        x: keras tensor
        nb_layers: the number of layers of conv_block to append to the model
        nb_filter: number of filters
        growth_rate: growth rate
        bottleneck: bottleneck block
        dropout_rate: dropout rate
        weight_decay: weight decay factor
        grow_nb_filters: flag to decide whether to allow the number of filters to grow
        return_concat_list: return the list of feature maps along with the actual output
    Returns:
        keras tensor with nb_layers of conv_block appended
    '''
    concat_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x_list = [x]

    for i in range(nb_layers):
        cb = conv_block(x, growth_rate, bottleneck, dropout_rate, weight_decay)
        x_list.append(cb)
        # each new feature map is concatenated onto everything produced so far
        x = concatenate([x, cb], axis=concat_axis)

        if grow_nb_filters:
            nb_filter += growth_rate

    if return_concat_list:
        return x, nb_filter, x_list
    else:
        return x, nb_filter

The line x = concatenate([x, cb], axis=concat_axis) keeps x as a running global state across iterations: on the first iteration the input is x and the output is cb1; on the second the input is [x, cb1] and the output is cb2; on the third the input is [x, cb1, cb2] and the output is cb3, and so on. The growth rate growth_rate is simply the number of kernels each conv_block uses, i.e. the number of output channels each layer contributes.
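
The channel count therefore grows linearly: after nb_layers iterations the output has input_channels + nb_layers * growth_rate channels, which is also what the returned nb_filter tracks. A minimal sketch (same assumptions as above):

ip = Input(shape=(32, 32, 16))
out, nb_filter = dense_block(ip, nb_layers=4, nb_filter=16, growth_rate=12)
print(K.int_shape(out), nb_filter)   # (None, 32, 32, 64) 64, i.e. 16 + 4 * 12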


Create_dense_net:

Build the full network model:

 
def create_dense_net(nb_classes, img_input, include_top, depth=40, nb_dense_block=3, growth_rate=12, nb_filter=-1,
                     nb_layers_per_block=[1], bottleneck=False, reduction=0.0, dropout_rate=None, weight_decay=1e-4,
                     subsample_initial_block=False, activation='softmax'):
    '''Build the DenseNet model
    Args:
        nb_classes: number of classes
        img_input: input tensor of shape (channels, rows, columns) or (rows, columns, channels)
        include_top: flag to include the final Dense layer
        depth: number of layers
        nb_dense_block: number of dense blocks to add to end (generally = 3)
        growth_rate: number of filters to add per dense block
        nb_filter: initial number of filters. Default -1 indicates initial number of filters is 2 * growth_rate
        nb_layers_per_block: list, number of layers in each dense block
        bottleneck: add bottleneck blocks
        reduction: reduction factor of transition blocks. Note: reduction value is inverted to compute compression
        dropout_rate: dropout rate
        weight_decay: weight decay rate
        subsample_initial_block: set to True to subsample the initial convolution and
            add a MaxPooling2D before the dense blocks are added
        activation: type of activation at the top layer. Can be one of 'softmax' or 'sigmoid'.
            Note that if sigmoid is used, classes must be 1.
    Returns: keras tensor with nb_layers of conv_block appended
    '''
    concat_axis = 1 if K.image_data_format() == 'channels_first' else -1

    if type(nb_layers_per_block) is not list:
        print('nb_layers_per_block should be a list!!!')
        return 0

    final_nb_layer = nb_layers_per_block[-1]
    nb_layers = nb_layers_per_block[:-1]

    if nb_filter <= 0:
        nb_filter = 2 * growth_rate
    compression = 1.0 - reduction

    # ImageNet-style models start with a strided 7x7 conv, CIFAR-style with a 3x3 conv
    if subsample_initial_block:
        initial_kernel = (7, 7)
        initial_strides = (2, 2)
    else:
        initial_kernel = (3, 3)
        initial_strides = (1, 1)

    x = Conv2D(nb_filter, initial_kernel, kernel_initializer='he_normal', padding='same',
               strides=initial_strides, use_bias=False, kernel_regularizer=l2(weight_decay))(img_input)
    if subsample_initial_block:
        x = BatchNormalization(axis=concat_axis, epsilon=1.1e-5)(x)
        x = Activation('relu')(x)
        x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    for block_index in range(nb_dense_block - 1):
        x, nb_filter = dense_block(x, nb_layers[block_index], nb_filter, growth_rate, bottleneck=bottleneck,
                                   dropout_rate=dropout_rate, weight_decay=weight_decay)
        x = transition_block(x, nb_filter, compression=compression, weight_decay=weight_decay)
        nb_filter = int(nb_filter * compression)

    # the last dense block is not followed by a transition_block
    x, nb_filter = dense_block(x, final_nb_layer, nb_filter, growth_rate, bottleneck=bottleneck,
                               dropout_rate=dropout_rate, weight_decay=weight_decay)

    x = BatchNormalization(axis=concat_axis, epsilon=1.1e-5)(x)
    x = Activation('relu')(x)
    x = GlobalAveragePooling2D()(x)

    if include_top:
        x = Dense(nb_classes, activation=activation)(x)

    return x

Generate the Model:

 
input_shape = (224, 224, 3)   # assumed ImageNet-style RGB input; adjust to your data
inputs = Input(shape=input_shape)
x = create_dense_net(nb_classes=1000, img_input=inputs, include_top=True, depth=169, nb_dense_block=4,
                     growth_rate=32, nb_filter=64, nb_layers_per_block=[6, 12, 32, 32], bottleneck=True,
                     reduction=0.5, dropout_rate=0.0, weight_decay=1e-4, subsample_initial_block=True,
                     activation='softmax')
model = Model(inputs, x, name='densenet169')
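
From here it behaves like any other Keras model; a minimal usage sketch (the optimizer and loss are placeholders, not a recommendation):

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()   # DenseNet-169: 6 + 12 + 32 + 32 conv_blocks across 4 dense blocks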
