In the previous post I wrote up study notes on the VGGNet paper along with the network structure; see http://blog.csdn.net/qq_29340857/article/details/71440674. All the code in this post is based on that write-up. Without further ado, the code follows.
Input definitions (since training uses mini-batch stochastic gradient descent, the training data is fed in through placeholders):
tf_train_data = tf.placeholder(tf.float32, shape=(batch_size, img_size, img_size, 3))  # mini-batch of RGB images
tf_train_label = tf.placeholder(tf.float32, shape=(batch_size, img_num))  # one-hot labels, img_num classes
tf_valid_data = tf.constant(valid_data)  # validation/test sets are fixed, so constants suffice
tf_test_data = tf.constant(test_data)
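Because the training tensors are placeholders, each SGD step has to slice a fresh mini-batch out of the full training set and pass it in via feed_dict. As a hedged sketch in pure Python (the function name and the num_samples/num_steps parameters are mine, not from the post), this is the offset arithmetic such a loop typically uses:

```python
def batch_offsets(num_samples, batch_size, num_steps):
    # One offset per SGD step: slide a window of batch_size over the
    # training set, wrapping around so every step gets a full batch.
    # The slice data[offset:offset+batch_size] would then be fed into
    # tf_train_data (and likewise for the labels) through feed_dict.
    offsets = []
    for step in range(num_steps):
        offset = (step * batch_size) % (num_samples - batch_size)
        offsets.append(offset)
    return offsets

print(batch_offsets(10, 4, 4))  # wraps after the window reaches the end
```

The modulo against `num_samples - batch_size` (rather than `num_samples`) guarantees the slice never runs past the end of the array, at the cost of revisiting early samples slightly more often.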
Weight initialization (not counting the pooling layers, the conv layers plus fc layers make up the 16 weight layers of the network):
w={  # conv kernels, shape [height, width, in_channels, out_channels]
'w1':tf.get_variable('w1',[3,3,3,64],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w2':tf.get_variable('w2',[3,3,64,64],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w3':tf.get_variable('w3',[3,3,64,128],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w4':tf.get_variable('w4',[3,3,128,128],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w5':tf.get_variable('w5',[3,3,128,256],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w6':tf.get_variable('w6',[3,3,256,256],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w7':tf.get_variable('w7',[3,3,256,256],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
'w8':tf.get_variable('w8',[3,3,256,512],initializer=tf.contrib.layers.xavier_initializer_conv2d()),
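For reference, the xavier_initializer_conv2d() used above draws weights from a zero-centered distribution whose scale is set by the kernel's fan-in and fan-out, which keeps activation variance roughly stable through the deep stack. A minimal sketch of the uniform-variant bound (the function name xavier_conv2d_limit is mine, not TensorFlow's):

```python
import math

def xavier_conv2d_limit(kh, kw, in_ch, out_ch):
    # Xavier/Glorot uniform bound for a conv kernel of shape
    # [kh, kw, in_ch, out_ch]: weights ~ U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)).
    fan_in = kh * kw * in_ch    # inputs feeding each output unit
    fan_out = kh * kw * out_ch  # outputs each input unit feeds
    return math.sqrt(6.0 / (fan_in + fan_out))

# Bound for the first kernel 'w1' of shape [3, 3, 3, 64]:
print(xavier_conv2d_limit(3, 3, 3, 64))
```

Note how the bound shrinks as the channel counts grow, so the deeper 512-channel kernels start with much smaller weights than 'w1'.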