Notes: a simple reproduction of a neural network model from a paper

The paper

Link: https://opg.optica.org/ol/abstract.cfm?uri=ol-46-22-5667

From the abstract we can already extract the useful information about the neural network: the input is spectral data (acquired with the home-built dark-field microscope described later in the paper), and the outputs (i.e., the quantities to be predicted) are the layer refractive index (n) and the layer thickness (d).
Reading on, the following passage, together with the figure and its caption, gives a rough description of the network structure:

Original text

Therefore, the residual structure is a better choice than the traditional convolution neural network. The spectral feature capture module mentioned above has two residual blocks. Each block has two convolution blocks, each of which includes two convolution layers and two batch normalization layers. Simultaneously, before residual blocks, an independent convolution layer was used to extract the entire spectral primary features. This convolution layer is located at the input position. The extracted full spectrum features are then fed into the parameter deduction module to determine the multiple sensor parameters. A total of 135,500 samples were used to construct the model. The specific features of the model are presented in Fig.

Figure

Caption: Scheme of our deep learning model based on a convolution network. C/L is a convolution layer with the activation function of LeakyReLU. BN refers to batch normalization. The full scattering spectra, as inputs of the deep learning model, are processed first by the spectral feature capture module and subsequently by the parameter deduction module. Therefore, the environmental parameters [layer refractive index (n) and layer thickness (d) in this Letter] of plasmon sensing can be predicted. Among these steps, the spectral feature capture module is a convolution network with residual blocks, and the parameter deduction module is a multilayer perceptron network. The same color represents the same network type, but the specific parameters are different. Here the dark-field scattering spectral with 3000 data points is regarded as the input matrix, and the two predicted sensing parameters are regarded as the outputs. The multilayer perceptron network has seven layers.

The gist: the model's spectral feature capture module (the network consists of two main modules, the spectral feature capture module and the parameter deduction module mentioned later) uses a residual structure with two residual blocks in total; each residual block contains two convolution layers with LeakyReLU activations and two batch normalization (BN) layers. In addition, an independent convolution layer sits before the residual blocks to extract the primary features of the whole spectrum (hmm... isn't that basically preprocessing? (lll¬ω¬)). The parameter deduction module that follows is built from a multilayer perceptron; after training, the network can predict the two parameters mentioned above, layer refractive index and layer thickness.
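To make that structure concrete, here is a minimal Keras sketch of my reading of the two-module layout (not the authors' code): a 3000-point spectrum passes through an independent convolution layer, two residual blocks (each made of two convolution blocks with two convolution layers and two BN layers, as quoted above), and then a seven-layer perceptron that outputs n and d. The use of 1D convolutions, the filter counts, and the kernel sizes are my own assumptions; the perceptron widths follow the TensorFlow 1.x code later in this post.

from tensorflow.keras import layers, Model

def conv_block(t, filters):
    # one convolution block: two convolution layers, each followed by BN
    for _ in range(2):
        t = layers.Conv1D(filters, 3, padding="same")(t)
        t = layers.LeakyReLU(alpha=0.2)(t)
        t = layers.BatchNormalization()(t)
    return t

def residual_block(t, filters):
    # one residual block: two convolution blocks plus a shortcut connection
    shortcut = layers.Conv1D(filters, 1, padding="same")(t)  # 1x1 conv to match channel counts
    y = conv_block(t, filters)
    y = conv_block(y, filters)
    return layers.add([y, shortcut])

spectrum = layers.Input(shape=(3000, 1))            # full dark-field scattering spectrum

# spectral feature capture module
x = layers.Conv1D(16, 3, padding="same")(spectrum)  # independent conv layer at the input
x = layers.LeakyReLU(alpha=0.2)(x)
x = residual_block(x, 32)
x = residual_block(x, 64)

# parameter deduction module: seven-layer perceptron
x = layers.Flatten()(x)
for units in (300, 200, 100, 50, 20, 10):
    x = layers.Dense(units)(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
outputs = layers.Dense(2)(x)                        # predicted n and d

model = Model(spectrum, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()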

I then checked the paper's supplementary information, which gives more detailed network parameters:
[Figures: detailed network parameters from the supplementary information]
From these, a rough diagram of the network structure can be drawn:
[Figure: sketch of the network structure]

A quick rant about the WPS watermark, so annoying.
(Really I just don't want to pay for the membership and can't be bothered to use Photoshop.)

Now, on to reproducing the network.

The model

The reproduction here uses the TensorFlow and Keras frameworks with Python.
The residual block of the classic ResNet model is used as a reference.

Structure

[Figure: structure of the residual block]

Source code

Source: https://github.com/JYNi16/Deeplearning-Beginner/tree/master/ResNet

from tensorflow.keras import layers

class ResBlockUp(layers.Layer):
    def __init__(self, out_channel):
        super(ResBlockUp, self).__init__()
        self.c1 = layers.Conv2D(out_channel, kernel_size=3, strides=1, padding="same", use_bias=False)
        self.bn1 = layers.BatchNormalization()
        self.r1 = layers.Activation("relu")
        self.c2 = layers.Conv2D(out_channel, kernel_size=3, strides=1, padding="same", use_bias=False)
        self.bn2 = layers.BatchNormalization()
        self.r2 = layers.Activation("relu")

    def call(self, x):
        res = x
        x = self.c1(x)
        x = self.bn1(x)
        x = self.r1(x)
        x = self.c2(x)
        x = self.bn2(x)
        x = x + res
        x = self.r2(x)
        return x

Modifying this gives the residual block used here:

import tensorflow as tf
from tensorflow.keras import layers, Sequential


class ResidualBlock(layers.Layer):
    # Residual block
    def __init__(self, filter_num, stride=1):
        super(ResidualBlock, self).__init__()
        # LeakyReLU has no trainable weights, so a single instance can be reused
        self.LeakyReLU = layers.LeakyReLU(alpha=0.2)

        # First convolution unit
        self.conv1 = layers.Conv2D(filter_num, (3, 3), strides=stride, padding='same')
        self.bn1 = layers.BatchNormalization()

        # Second convolution unit
        self.conv2 = layers.Conv2D(filter_num, (3, 3), strides=1, padding='same')
        self.bn2 = layers.BatchNormalization()

        # Third convolution unit (strides=1 so that only conv1 downsamples and the
        # shortcut shape still matches when stride != 1)
        self.conv3 = layers.Conv2D(filter_num, (3, 3), strides=1, padding='same')
        self.bn3 = layers.BatchNormalization()

        # Fourth convolution unit
        self.conv4 = layers.Conv2D(filter_num, (3, 3), strides=1, padding='same')
        self.bn4 = layers.BatchNormalization()

        if stride != 1:  # match shapes on the shortcut path with a 1x1 convolution
            self.downsample = Sequential()
            self.downsample.add(layers.Conv2D(filter_num, (1, 1), strides=stride))
        else:  # shapes already match, identity shortcut
            self.downsample = lambda x: x

    def call(self, inputs, training=None):

        # [b, h, w, c], first convolution unit
        out = self.conv1(inputs)
        out = self.LeakyReLU(out)
        out = self.bn1(out)

        # second convolution unit
        out = self.conv2(out)
        out = self.LeakyReLU(out)
        out = self.bn2(out)

        # third convolution unit (takes the previous output, not the raw inputs)
        out = self.conv3(out)
        out = self.LeakyReLU(out)
        out = self.bn3(out)

        # fourth convolution unit
        out = self.conv4(out)
        out = self.LeakyReLU(out)
        out = self.bn4(out)

        # identity (shortcut) path
        identity = self.downsample(inputs)
        # add the two paths
        output = layers.add([out, identity])
        output = tf.nn.relu(output)  # final activation

        return output
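A quick shape check of the block (a minimal smoke test under the assumption of TensorFlow 2.x eager execution, with the class above already defined; the input sizes are arbitrary):

import tensorflow as tf

# arbitrary dummy batch: 4 feature maps of size 28x28 with 16 channels
dummy = tf.random.normal([4, 28, 28, 16])

# stride 1: spatial size is preserved and the identity shortcut is used
block = ResidualBlock(filter_num=16, stride=1)
print(block(dummy).shape)        # expected (4, 28, 28, 16)

# stride 2: conv1 downsamples and the 1x1 shortcut convolution matches the shape
block_down = ResidualBlock(filter_num=32, stride=2)
print(block_down(dummy).shape)   # expected (4, 14, 14, 32)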

For readability, the complete code below is written in TensorFlow 1.x only, with additional comments (the MNIST dataset is used purely as a placeholder; because of the shape mismatches it cannot actually be trained as-is).

import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data",one_hot=True)

# Size of each batch
batch_size = 100
# Total number of batches
n_batch = mnist.train.num_examples // batch_size

# Weight initialization
def weight_variable(shape):
    initial = tf.truncated_normal(shape,stddev=0.1)
    # a truncated normal distribution with standard deviation 0.1
    return tf.Variable(initial)

# Bias initialization
def bias_variable(shape):
    initial = tf.constant(0.1,shape=shape)
    return tf.Variable(initial)

# Convolution layer
def conv2d(x,W):
    # x is the input tensor with shape [batch, in_height, in_width, in_channels],
    # i.e. [number of images per training batch, image height, image width, number of channels]
    # W is the convolution kernel (filter) with shape [filter_height, filter_width, in_channels, out_channels],
    # i.e. [kernel height, kernel width, number of input channels, number of kernels]
    # strides[0] and strides[3] are fixed to 1; strides[1] is the step in the x direction, strides[2] in the y direction
    # padding selects the convolution mode; SAME pads the borders with zeros
    return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding="SAME")

# Pooling layer (max pooling)
def max_pool_2x2(x):
    # ksize[0] and ksize[3] default to 1; the middle 2,2 is the size of the pooling window
    # strides has the same layout as in conv2d, so the step is 2 in both the x and y directions
    return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding="SAME")

# Define two placeholders
x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,2])

# Reshape x into a 4D tensor
x_image = tf.reshape(x,[-1,28,28,1])
# Second argument: [batch, in_height, in_width, in_channels], i.e. [number of images per batch, image height, image width, number of channels]
# The -1 is resolved at run time to 100 (the number of images in each batch)

'''Preprocessing of the input tensor'''
W_conv1 = weight_variable([3,3,1,16]) 

h_conv1 = tf.nn.leaky_relu(conv2d(x_image,W_conv1))

h_pool1 = max_pool_2x2(h_conv1)

'''First residual block'''
W_conv2 = weight_variable([3,3,16,32])

h_conv2 = tf.nn.leaky_relu(conv2d(h_pool1,W_conv2))

# do not reuse the name x here, otherwise the input placeholder gets overwritten
R_conv2 = tf.layers.batch_normalization(h_conv2, training=True)

W_conv3 = weight_variable([3,3,32,32])

h_conv3 = tf.nn.leaky_relu(conv2d(R_conv2,W_conv3))

R_conv3 = tf.layers.batch_normalization(h_conv3, training=True)

'''Second residual block'''
W_conv4 = weight_variable([3,3,32,64])

h_conv4 = tf.nn.leaky_relu(conv2d(R_conv3,W_conv4))

R_conv4 = tf.layers.batch_normalization(h_conv4, training=True)

W_conv5 = weight_variable([3,3,64,64])

h_conv5 = tf.nn.leaky_relu(conv2d(R_conv4,W_conv5))

R_conv5 = tf.layers.batch_normalization(h_conv5, training=True)

# Add a pooling layer
h_pool2 = max_pool_2x2(R_conv5)

# Shortcut addition
# Flatten the pooling outputs to 1D
h_pool1_flat = tf.reshape(h_pool1,[-1,7*7*64])

h_pool2_flat = tf.reshape(h_pool2,[-1,7*7*64])

add = tf.add(h_pool1_flat, h_pool2_flat)

'''Build the multilayer perceptron (7 layers)'''
# Layer 1 (fed with the summed features from the shortcut addition above)
W_fc1 = weight_variable([7*7*64,300])

h_fc1 = tf.nn.leaky_relu(tf.matmul(add,W_fc1))

# Layer 2
W_fc2 = weight_variable([300,200])

h_fc2 = tf.nn.leaky_relu(tf.matmul(h_fc1,W_fc2))

# Layer 3
W_fc3 = weight_variable([200,100])

h_fc3 = tf.nn.leaky_relu(tf.matmul(h_fc2,W_fc3))

# Layer 4
W_fc4 = weight_variable([100,50])

h_fc4 = tf.nn.leaky_relu(tf.matmul(h_fc3,W_fc4))

# Layer 5
W_fc5 = weight_variable([50,20])

h_fc5 = tf.nn.leaky_relu(tf.matmul(h_fc4,W_fc5))

# Layer 6
W_fc6 = weight_variable([20,10])

h_fc6 = tf.nn.leaky_relu(tf.matmul(h_fc5,W_fc6))

# Layer 7 (outputs the two predicted parameters n and d)
W_fc7 = weight_variable([10,2])

h_fc7 = tf.nn.leaky_relu(tf.matmul(h_fc6,W_fc7))

'''Not used for now
# keep_prob is the probability that a neuron's output is kept (dropout)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
'''

# Mean squared error loss (this is a regression, not the cross-entropy of the original MNIST example)
cross_entropy = tf.reduce_mean(tf.square(h_fc7 - y))
# Optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Store the comparison results in a list of booleans
correct_prediction = tf.equal(tf.argmax(h_fc7,1),tf.argmax(y,1)) # argmax returns the index of the largest value in a 1D tensor
# Compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

# saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(11):
        for batch in range(n_batch):
            # Feed in the data
            batch_xs,batch_ys =  mnist.train.next_batch(batch_size)

            # NOTE: this reshape does not match the [None,784] placeholder; it is one of the
            # shape mismatches that keep the MNIST placeholder data from actually training
            batch_xs = np.reshape(batch_xs, (-1, 14, 14, 64))

            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys})
            '''
            # Train the network with 70% of the neurons kept (dropout)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys,keep_prob:0.7})
            '''
        '''
        # Evaluate the accuracy with 100% of the neurons kept
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels,keep_prob:1.0})
        '''
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        # Print the result
        print ("Iter " + str(epoch) + ", Testing Accuracy= " + str(acc))
       
    # saver.save(sess, r"D:\anaconda\vscode-python\learn\OpenCV_and_TensorFlow\Net\Net_MNIST\my_net.ckpt")
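Once a real dataset is available, the inputs should be 3000-point spectra with two regression targets (n and d) rather than 28x28 images with 10 one-hot classes. Below is a purely hypothetical sketch of the data shapes and feeding loop (the sample count and placeholder shapes are my assumptions, and the sess.run call stays commented out because the graph above is still wired for the MNIST placeholder):

import numpy as np

# hypothetical placeholder data: 1000 spectra with 3000 points each,
# and two regression targets per spectrum (layer refractive index n and thickness d)
num_samples = 1000
spectra = np.random.rand(num_samples, 3000).astype(np.float32)
targets = np.random.rand(num_samples, 2).astype(np.float32)

batch_size = 100
for start in range(0, num_samples, batch_size):
    batch_x = spectra[start:start + batch_size]   # shape (100, 3000)
    batch_y = targets[start:start + batch_size]   # shape (100, 2)
    # with x defined as tf.placeholder(tf.float32, [None, 3000]) and
    # y as tf.placeholder(tf.float32, [None, 2]), a training step would be:
    # sess.run(train_step, feed_dict={x: batch_x, y: batch_y})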

Next I'll look for a suitable dataset to train with (spectroscopy datasets (hyperspectral ones, I'm not talking about you...) are really hard to find QAQ).
