Since deep neural networks entered computer vision, an end-to-end network for training and prediction can be viewed as two problems: feature representation and task decision (classification, regression, and so on). When our own training data is too small, we often rely on networks that experts have already pretrained to extract features, and then append layers for our specific task on top and fine-tune. The ILSVRC competition (a 1000-class classification problem) currently uses a training set of 1.26 million images, a validation set of 50,000, and a test set of 100,000 (labels not released); people generally build their task-specific networks on top of the competition's top-ranking architectures.
This post briefly shows how to load a pretrained VGG network with TensorFlow; other networks (AlexNet, ResNet, etc.) follow the same pattern. It has three parts: part one covers downloading the network definition and pretrained weights, part two shows how to use feature maps from the pretrained network, and part three lists references. Note: this material was collected and organized while learning; corrections to any misunderstandings are welcome.
1. Download the network definition and pretrained weights
https://github.com/leihe001/tensorflow-vgg (network definitions for training and testing)
https://mega.nz/#!YU1FWJrA!O1ywiCS2IiOlUCtCpI6HTJOMrneN-Qdv3ywQP5poecM VGG16
https://mega.nz/#!xZ8glS6J!MAnE91ND_WyfZ_8mvkuSa2YcA7q-1ehfSm-Q1fxOvvs VGG19
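The downloaded `.npy` weight file is a pickled Python dict mapping layer names to `[weights, biases]` pairs, which is the layout the `Vgg16` class below indexes into. As a hedged sketch of that format (using a tiny mock dict rather than the real 500+ MB file, and shapes that match VGG16's first conv layer and last FC layer):

```python
import numpy as np

# A tiny mock dict in the same {layer_name: [weights, biases]} layout
# that the vgg16.npy file from the tensorflow-vgg repo uses.
mock = {
    "conv1_1": [np.zeros((3, 3, 3, 64), dtype=np.float32),   # filter: H, W, in, out
                np.zeros((64,), dtype=np.float32)],          # biases
    "fc8":     [np.zeros((4096, 1000), dtype=np.float32),    # FC weights
                np.zeros((1000,), dtype=np.float32)],
}
np.save("mock_vgg16.npy", mock)

# Loading mirrors the Vgg16.__init__ call below; newer NumPy versions
# require allow_pickle=True to unpickle the stored dict.
data_dict = np.load("mock_vgg16.npy", encoding="latin1", allow_pickle=True).item()
print(sorted(data_dict.keys()))       # ['conv1_1', 'fc8']
print(data_dict["conv1_1"][0].shape)  # (3, 3, 3, 64)
```

A quick check like this after downloading confirms the file is intact before building the graph.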
2. Using feature maps from the pretrained network (VGG16 as an example)
```python
import inspect
import os
import time

import numpy as np
import tensorflow as tf

# Per-channel means (B, G, R) subtracted from VGG training images
VGG_MEAN = [103.939, 116.779, 123.68]


class Vgg16:
    def __init__(self, vgg16_npy_path=None):
        if vgg16_npy_path is None:
            # Default to a vgg16.npy file next to this source file
            path = inspect.getfile(Vgg16)
            path = os.path.abspath(os.path.join(path, os.pardir))
            path = os.path.join(path, "vgg16.npy")
            vgg16_npy_path = path
            print(path)
        # Load the pretrained weights: a dict of layer name -> [weights, biases]
        # (newer NumPy versions also require allow_pickle=True here)
        self.data_dict = np.load(vgg16_npy_path, encoding='latin1').item()
        print("npy file loaded")

    def build(self, rgb):
        """Build the VGG16 graph.

        rgb: RGB image batch of shape [batch, 224, 224, 3], values in [0, 1].
        """
        start_time = time.time()
        print("build model started")
        rgb_scaled = rgb * 255.0

        # Convert RGB to BGR and subtract the per-channel means
        # (tf.split/tf.concat use the TF 1.x argument order here)
        red, green, blue = tf.split(rgb_scaled, 3, axis=3)
        assert red.get_shape().as_list()[1:] == [224, 224, 1]
        assert green.get_shape().as_list()[1:] == [224, 224, 1]
        assert blue.get_shape().as_list()[1:] == [224, 224, 1]
        bgr = tf.concat([
            blue - VGG_MEAN[0],
            green - VGG_MEAN[1],
            red - VGG_MEAN[2],
        ], axis=3)
        assert bgr.get_shape().as_list()[1:] == [224, 224, 3]

        self.conv1_1 = self.conv_layer(bgr, "conv1_1")
        self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
        self.pool1 = self.max_pool(self.conv1_2, 'pool1')

        self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
        self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
        self.pool2 = self.max_pool(self.conv2_2, 'pool2')

        self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
        self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
        self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
        self.pool3 = self.max_pool(self.conv3_3, 'pool3')

        self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
        self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
        self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
        self.pool4 = self.max_pool(self.conv4_3, 'pool4')

        self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
        self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
        self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
        self.pool5 = self.max_pool(self.conv5_3, 'pool5')

        self.fc6 = self.fc_layer(self.pool5, "fc6")
        assert self.fc6.get_shape().as_list()[1:] == [4096]
        self.relu6 = tf.nn.relu(self.fc6)

        self.fc7 = self.fc_layer(self.relu6, "fc7")
        self.relu7 = tf.nn.relu(self.fc7)

        self.fc8 = self.fc_layer(self.relu7, "fc8")

        self.prob = tf.nn.softmax(self.fc8, name="prob")

        # Release the weight dict; the graph constants now hold the data
        self.data_dict = None
        print("build model finished: %ds" % (time.time() - start_time))

    def avg_pool(self, bottom, name):
        return tf.nn.avg_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                              padding='SAME', name=name)

    def max_pool(self, bottom, name):
        return tf.nn.max_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                              padding='SAME', name=name)

    def conv_layer(self, bottom, name):
        # 3x3 convolution + bias + ReLU, with weights taken from the npy dict
        with tf.variable_scope(name):
            filt = self.get_conv_filter(name)
            conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')
            conv_biases = self.get_bias(name)
            bias = tf.nn.bias_add(conv, conv_biases)
            relu = tf.nn.relu(bias)
            return relu

    def fc_layer(self, bottom, name):
        # Flatten the input, then apply a fully connected layer
        with tf.variable_scope(name):
            shape = bottom.get_shape().as_list()
            dim = 1
            for d in shape[1:]:
                dim *= d
            x = tf.reshape(bottom, [-1, dim])

            weights = self.get_fc_weight(name)
            biases = self.get_bias(name)
            fc = tf.nn.bias_add(tf.matmul(x, weights), biases)
            return fc

    def get_conv_filter(self, name):
        return tf.constant(self.data_dict[name][0], name="filter")

    def get_bias(self, name):
        return tf.constant(self.data_dict[name][1], name="biases")

    def get_fc_weight(self, name):
        return tf.constant(self.data_dict[name][0], name="weights")
```
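The channel reordering and mean subtraction at the top of `build()` can be reproduced with plain NumPy, which makes the intent easy to verify: VGG was trained on BGR images with a fixed per-channel mean subtracted, so RGB inputs must be converted before being fed to `conv1_1`. A minimal sketch on a fake batch:

```python
import numpy as np

VGG_MEAN = [103.939, 116.779, 123.68]  # B, G, R means from the VGG authors

# A fake batch of RGB images in [0, 1], as build() expects.
rgb = np.random.rand(1, 224, 224, 3).astype(np.float32)

rgb_scaled = rgb * 255.0
red, green, blue = np.split(rgb_scaled, 3, axis=3)
bgr = np.concatenate([blue - VGG_MEAN[0],
                      green - VGG_MEAN[1],
                      red - VGG_MEAN[2]], axis=3)

print(bgr.shape)  # (1, 224, 224, 3)
```

Channel 0 of the result is the original blue channel minus 103.939, mirroring what the TensorFlow ops build into the graph.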
The above is the definition of the VGG16 network. Suppose we now have an input image `image` and want to do segmentation; we can train and test an end-to-end fully convolutional network. For this task we only need the pool5 feature map:
```python
vgg = vgg16.Vgg16()
vgg.build(image)                 # image: [batch, 224, 224, 3] RGB in [0, 1]
feature_map = vgg.pool5          # [batch, 7, 7, 512]
mask = yournetwork(feature_map)  # your own segmentation head
```
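Why pool5? Each of the five 2x2, stride-2 max pools halves the spatial resolution, so for a 224x224 input the pool5 feature map is 7x7 with 512 channels, and a segmentation head built on it must upsample by a factor of 32. The arithmetic:

```python
# Spatial size after each pooling stage for a 224x224 input.
size = 224
for i in range(1, 6):
    size //= 2  # each 2x2/stride-2 max pool halves height and width
    print("pool%d: %dx%d" % (i, size, size))
# pool1: 112x112 ... pool5: 7x7

# pool5 holds 7*7*512 values per image, which is exactly the
# flattened input dimension that fc6 expects.
print(7 * 7 * 512)  # 25088
```

This factor-of-32 downsampling is why fully convolutional segmentation networks built on VGG16 need aggressive upsampling (or skip connections from earlier pools) to recover pixel-level detail.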
3. References
https://github.com/leihe001/tensorflow-vgg
https://github.com/leihe001/tfAlexNet
https://github.com/leihe001/tensorflow-resnet