A Simple DenseNet Implementation in TensorFlow


Preface

I have recently been studying deep-learning architectures and came across DenseNet. It looked straightforward to implement, so I built a small DenseNet on a one-dimensional dataset.
DenseNet Network Structure
The key idea is that within each dense block, every convolutional layer is connected to all of the layers before it. Concretely, this is done with tf.concat: the output of each convolution is concatenated, along the channel axis, with the feature maps of all preceding layers before being passed on. This differs from the usual ResNet, which combines feature maps by element-wise addition rather than concatenation.
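A minimal sketch (with hypothetical shapes, not from the original post) of the difference: concatenation stacks channels, while a residual add leaves the channel count unchanged.

import tensorflow as tf

# Hypothetical feature maps: batch=1, height=1, width=8, channels=4
a = tf.ones([1, 1, 8, 4])
b = tf.ones([1, 1, 8, 4])

dense_style = tf.concat([a, b], axis=3)  # DenseNet: channels stack -> shape (1, 1, 8, 8)
resnet_style = a + b                     # ResNet: element-wise add -> shape (1, 1, 8, 4)
print(dense_style.shape, resnet_style.shape)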

Code Implementation

The implementation is split into two parts, dense_block and transition_layer: the former implements a single dense block, and the latter joins consecutive blocks together (BN + ReLU + convolution + pooling).
dense_block

def dense_block(p, layers_nums=3):
    # Each iteration applies BN -> ReLU -> conv, then concatenates the new
    # features with everything produced so far (dense connectivity)
    for i in range(layers_nums):
        x = tf.layers.batch_normalization(p)
        x = tf.nn.relu(x)
        x = tf.layers.conv2d(inputs=x, filters=4, kernel_size=[1,3], strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.01))
        x = tf.concat([x, p], axis=3)
        p = x
    return x

Here p is the input tensor and layers_nums is the number of convolutional layers in each block; the default is 3.
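Because every layer adds filters=4 new channels (the growth rate), a block maps C input channels to C + 4 * layers_nums output channels. A quick shape check, assuming a hypothetical 4-channel input purely for illustration:

inp = tf.placeholder(tf.float32, [None, 1, 1015, 4])  # hypothetical input
out = dense_block(inp, layers_nums=3)
print(out.get_shape().as_list())  # [None, 1, 1015, 16], i.e. 4 + 3*4 channels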
transition_layer

def transition_layer(x):
    x = tf.layers.batch_normalization(x)
    x = tf.nn.relu(x)
    n_inchannels = x.get_shape().as_list()[3]
    n_outchannels = int(n_inchannels * 0.5)  # compression factor 0.5: halve the channels
    x = tf.layers.conv2d(inputs=x, filters=n_outchannels, kernel_size=1, strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.01))
    x = tf.layers.average_pooling2d(inputs=x, pool_size=[1,2], strides=2)  # halve the feature length
    return x
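Continuing the hypothetical shape check from above, a transition layer after the block compresses the channels and roughly halves the sequence length:

out = transition_layer(out)       # `out` from the dense_block sketch above
print(out.get_shape().as_list())  # [None, 1, 507, 8]: channels 16 -> 8, length 1015 -> 507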

Connecting the blocks
The rest of the network works like an ordinary CNN, so I won't walk through it in detail; the complete code is given below.

Complete Code

import tensorflow as tf
import numpy as np
import tensorflow.contrib.slim as slim
import time

# Each row holds 1015 features followed by a 15-dim one-hot label
matix=np.loadtxt("D:/深度学习/all_feature/15—class_label15_four_train80.txt")
matix1=np.loadtxt("D:/深度学习/all_feature/15—class_label15_four_test40.txt")
tf.reset_default_graph()
batch_size = 100
learning_rate = 0.001
learning_rate_decay = 0.95
#num_layers_in_dense_block=3
def conv_layer(x):
    return tf.layers.conv2d(inputs=x, filters=4, kernel_size=[1,3], strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.01))

def dense_block(p,layers_nums=3):  
    for i in range(layers_nums):
        x = tf.layers.batch_normalization(p)
        x = tf.nn.relu(x)
        x=tf.layers.conv2d(inputs=x, filters=4, kernel_size=[1,3], strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.01))
        x=tf.concat([x, p], axis=3)
        p=x
    return x
def transition_layer(x):
    x = tf.layers.batch_normalization(x)
    x = tf.nn.relu(x)
    n_inchannels = x.get_shape().as_list()[3]
    n_outchannels = int(n_inchannels * 0.5)
    x = tf.layers.conv2d(inputs=x, filters=n_outchannels, kernel_size=1, strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.01))
    x = tf.layers.average_pooling2d(inputs=x, pool_size=[1,2], strides=2)
    return x
def inference(inputs):
    x = tf.reshape(inputs,[-1,1,1015,1])  # treat each 1015-dim sample as a 1x1015 single-channel image
    conv_1 = conv_layer(x)
    dense_1 = dense_block(conv_1,3)
    block_1=transition_layer(dense_1)
#    dense_2 = dense_block(block_1,4)
#    block_2=transition_layer(dense_2)

    net_flatten = slim.flatten(block_1,scope='flatten')
    fc_1 = slim.fully_connected(slim.dropout(net_flatten,0.8),200,activation_fn=tf.nn.tanh,scope='fc_1')
    output = slim.fully_connected(slim.dropout(fc_1,0.8),15,activation_fn=None,scope='output_layer')
    return output,net_flatten

def train():
    x = tf.placeholder(tf.float32, [None, 1015])
    y = tf.placeholder(tf.float32, [None, 15])
    y_outputs,net = inference(x)
    global_step = tf.Variable(0, trainable=False)

    # sparse cross-entropy expects class indices, so convert the one-hot labels with argmax
    entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y_outputs, labels=tf.argmax(y, 1))
    loss = tf.reduce_mean(entropy)
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)

    prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_outputs, 1))
    accuracy = tf.reduce_mean(tf.cast(prediction, tf.float32))

#    saver = tf.train.Saver()
    print('Start to train the DenseNet......')
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(3000):
            train_op_, loss_, step = sess.run([train_op, loss, global_step], feed_dict={x: matix[:,:1015], y: matix[:,1015:]})
            if i % 1 == 0:  # evaluate on the test set every step
                result = sess.run(loss, feed_dict={x: matix1[:,:1015], y: matix1[:,1015:]})
                acc_1 = sess.run(accuracy, feed_dict={x: matix1[:,:1015], y: matix1[:,1015:]})
                print(str(i)+' '+str(result)+' '+str(acc_1))
def main(_):
    train()
if __name__ == '__main__':
    t0 = time.time()  # time.clock() is deprecated and was removed in Python 3.8
    tf.app.run()
    print(time.time() - t0, " seconds process time ")
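One caveat worth flagging: tf.layers.batch_normalization defaults to training=False, so the moving statistics in the code above are never updated during training. The usual TF 1.x fix, sketched here with a hypothetical is_training placeholder, is to pass the flag explicitly and run the batch-norm update ops together with the training step:

is_training = tf.placeholder(tf.bool)  # hypothetical flag, fed as True during training
x = tf.layers.batch_normalization(x, training=is_training)

# the moving mean/variance updates live in UPDATE_OPS and must run with the train op
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)

slim.dropout has a similar is_training argument that defaults to True, so as written the evaluation runs above also apply dropout.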