Fully Convolutional Networks on KITTI: Semantic Segmentation

1. Introduction

FCN: Fully Convolutional Network (note: not a fully connected network)

[Figure: the original 2014 FCN paper]

This post is about applying fully convolutional networks to semantic segmentation.

If you have watched promotional videos from self-driving companies, you have probably seen their semantic segmentation results.

Why does semantic segmentation matter?

Because if the segmentation is accurate, we can precisely locate each object in the image.

Going one step further, if the image pixels are calibrated against a range sensor, we can obtain the precise distance of every surrounding object relative to us.

Comparing the output of semantic segmentation against that of an ordinary convolutional network feels roughly like this:

[Figure: conventional bounding-box labeling]

[Figure: semantic segmentation labeling]

That is the general idea. Bounding boxes are already quite accurate, but a box still contains things we do not need.

Semantic segmentation is like automatically cutting out the exact pixels of each target in the image.


2. How is semantic segmentation achieved?

FCN stands for Fully Convolutional Network. The name itself looks a lot like "fully connected network": one is convolutional, the other fully connected.

Roughly summarized: the fully convolutional network takes the class scores produced by the (convolutionalized) fully connected layers and enlarges them back to the original size with transposed convolutions (deconvolutions). Since every location already carries a class label, sensibly upsampling those labeled pixels is all that remains.

Deconvolution is convolution run in reverse. For example, apart from the 1x1 kernel case, a convolution generally makes the feature map smaller than before; reversing that process, everything except a 1x1 deconvolution makes the map larger (by default height and width grow together, e.g. height x3 and width x3).
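To make this concrete, here is a minimal sketch in the same TF 1.x style as the project code below. It only checks static shapes, so nothing is trained; the 5x18 input size is an arbitrary example.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 5, 18, 2])  # e.g. a coarse 5x18 score map
up2 = tf.layers.conv2d_transpose(x, 2, 4, strides=(2, 2), padding='same')    # kernel 4, stride 2
up8 = tf.layers.conv2d_transpose(up2, 2, 16, strides=(8, 8), padding='same') # kernel 16, stride 8

print(up2.get_shape().as_list())  # [None, 10, 36, 2]  -> height and width doubled
print(up8.get_shape().as_list())  # [None, 80, 288, 2] -> a further x8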

But the real core is the 1x1 convolution. Ask me to build such a layer and I can; ask me why it has to be a 1x1 convolution, and I am not so sure.

Honestly, I still have not fully figured it out.

A 1x1 convolution layer runs through all the channels while preserving the spatial correspondence between layers, and so on and so forth...

Hmm. Hmm. Hmm. I understand the words, I just do not understand what they mean. ╮(╯3╰)╭

Still, 1x1 convolutions are used in many places; FPN, for example, applies 1x1 convolutions before merging with other layers. I have not really understood why it must be done that way either...
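For what it's worth, the usual interpretation is that a 1x1 convolution acts as a per-pixel fully connected layer: at every spatial location it applies the same dense mapping across all channels, so the channel count changes while the spatial grid stays put. A minimal sketch (TF 1.x, with an arbitrary 5x18x4096 input mimicking the VGG layer-7 map used below):

import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 5, 18, 4096])  # layer-7-like feature map
scores = tf.layers.conv2d(features, 2, 1, padding='same')   # 1x1 kernel: 4096 channels -> 2 class scores

print(scores.get_shape().as_list())  # [None, 5, 18, 2] -- same 5x18 grid, per-pixel class scores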

[Figure: Jonathan Long, lead author of the FCN paper]

To avoid misleading anyone, I recommend reading a proper article, such as:

https://medium.com/@arthur_ouaknine/review-of-deep-learning-algorithms-for-image-semantic-segmentation-509a600f7b57

3. The dataset

Even without fully understanding it, I can still follow the recipe and give it a try.

So I experimented with the KITTI dataset:

The KITTI Vision Benchmark Suite: www.cvlibs.net

KITTI is a perception benchmark dataset for autonomous driving, containing a great deal of test data, and not only images. It is worth a look.

With this data available, some of the hardware setup becomes unnecessary. Besides, obtaining ground-truth data is hard in itself~

The image below is an example of very good semantic segmentation: almost every category of object is marked with its own color.

[Figure: an example of a well-segmented image]

As long as the targets are labeled properly, everything else is easy to handle~

This Udacity project only distinguishes whether each pixel in the image belongs to the road,

so there are only two classes. (With more classes, the code would be much the same.)
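For reference, here is roughly how a two-class label tensor can be built from a KITTI road ground-truth image. This is a hedged sketch of the idea, not the project's actual helper.py; the road color below is an assumption.

import numpy as np

ROAD_COLOR = np.array([255, 0, 255])  # assumption: the RGB color marking "road" pixels

def to_one_hot(gt_image):
    """Turn an (H, W, 3) label image into an (H, W, 2) one-hot array."""
    road = np.all(gt_image == ROAD_COLOR, axis=-1)[..., np.newaxis]  # (H, W, 1) boolean mask
    # Channel 0 = "not road", channel 1 = "road"
    return np.concatenate((np.invert(road), road), axis=-1).astype(np.float32)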


4. TensorFlow implementation

First, the front half of the FCN is an ordinary convolutional network; here VGG16 is used (in its fully convolutional version).

During training, VGG16 itself does not need to be trained, since it is pretrained; we only need to extract layers 3, 4, and 7. Layer 7 is effectively the output of the (convolutionalized) fully connected stage. That output goes through a 1x1 convolution and a stride-2 transposed convolution, then is added to a 1x1-convolved layer 4. The sum is upsampled by another stride-2 transposed convolution and added to a 1x1-convolved layer 3. At this point the map sits at 1/8 of the input resolution, so a final stride-8 transposed convolution restores the full image size: 2 x 2 x 8 = 32x upsampling overall relative to layer 7.

This is the structure described in the paper (the FCN-8s variant).

[Figure: the FCN architecture from the paper]
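Before the full code, a quick sanity check of the resolutions involved, assuming the 160x576 KITTI crop used later; VGG16 halves the spatial size at each pooling stage:

h, w = 160, 576
pool3 = (h // 8, w // 8)     # (20, 72) -> 1/8 resolution
pool4 = (h // 16, w // 16)   # (10, 36) -> 1/16 resolution
layer7 = (h // 32, w // 32)  # (5, 18)  -> 1/32 resolution

# Decoder: x2 from layer7 matches pool4, another x2 matches pool3,
# and a final x8 (2 * 2 * 8 = 32 overall) restores the full 160x576 input.
assert (layer7[0] * 2, layer7[1] * 2) == pool4
assert (pool4[0] * 2, pool4[1] * 2) == pool3
assert (pool3[0] * 8, pool3[1] * 8) == (h, w)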

Now, the implementation in Python 3:

#!/usr/bin/env python3
import os.path
import tensorflow as tf
import helper
import warnings
from distutils.version import LooseVersion
import project_tests as tests

# Reference : https://medium.com/nanonets/how-to-do-image-segmentation-using-deep-learning-c673cc5862ef
# Reference : https://github.com/darienmt/CarND-Semantic-Segmentation-P2/blob/master/main.py
# Reference : https://medium.com/intro-to-artificial-intelligence/semantic-segmentation-udaitys-self-driving-car-engineer-nanodegree-c01eb6eaf9d

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))

def load_vgg(sess, vgg_path):
    """
    Load Pretrained VGG Model into TensorFlow.
    :param sess: TensorFlow Session
    :param vgg_path: Path to vgg folder, containing "variables/" and "saved_model.pb"
    :return: Tuple of Tensors from VGG model (image_input, keep_prob, layer3_out, layer4_out, layer7_out)
    """
    # TODO: Implement function
    #   Use tf.saved_model.loader.load to load the model and weights
    vgg_tag = 'vgg16'
    vgg_input_tensor_name = 'image_input:0'
    vgg_keep_prob_tensor_name = 'keep_prob:0'
    vgg_layer3_out_tensor_name = 'layer3_out:0'
    vgg_layer4_out_tensor_name = 'layer4_out:0'
    vgg_layer7_out_tensor_name = 'layer7_out:0'

    # We load the vgg from the file
    tf.saved_model.loader.load(sess, [vgg_tag], vgg_path)

    # Then grab the default graph.
    graph = tf.get_default_graph()

    # Then get the layer by its name.
    input_layer = graph.get_tensor_by_name(vgg_input_tensor_name)
    keep_prob_tensor = graph.get_tensor_by_name(vgg_keep_prob_tensor_name)
    layer3_out = graph.get_tensor_by_name(vgg_layer3_out_tensor_name)
    layer4_out = graph.get_tensor_by_name(vgg_layer4_out_tensor_name)
    layer7_out = graph.get_tensor_by_name(vgg_layer7_out_tensor_name)

    return input_layer, keep_prob_tensor, layer3_out, layer4_out, layer7_out
  • Define the layers function
def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    """
    Create the layers for a fully convolutional network.  Build skip-layers using the vgg layers.
    :param vgg_layer3_out: TF Tensor for VGG Layer 3 output
    :param vgg_layer4_out: TF Tensor for VGG Layer 4 output
    :param vgg_layer7_out: TF Tensor for VGG Layer 7 output
    :param num_classes: Number of classes to classify
    :return: The Tensor for the last layer of output
    """
    # TODO: Implement function
    # An L2 kernel regularizer is attached to each conv / transposed-conv layer below.
    # First, a 1x1 convolution collapses VGG layer 7 down to num_classes channels.
    conv_1x1 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding = 'same',
                                kernel_regularizer = tf.contrib.layers.l2_regularizer(1e-3), name = 'conv_1x1_7th_layer')

    # Upsampling step using conv2d_transpose: kernel 4, stride 2,
    # so height and width double (now matching the VGG layer-4 feature map).
    x2_conv7 = tf.layers.conv2d_transpose(conv_1x1, num_classes, 4, strides = (2, 2), padding = 'same',
             kernel_regularizer = tf.contrib.layers.l2_regularizer(1e-3), name = 'x2_conv7')

    # As the paper describes, build skip connections.
    # First, a 1x1 convolution of the VGG layer-4 (pool4) output.
    pool_4_1x1 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding = 'same',
                                   kernel_regularizer = tf.contrib.layers.l2_regularizer(1e-3), name = 'pool_4_1x1')

    # Add the first skip connection.
    skip1 = tf.add(x2_conv7, pool_4_1x1, name = 'skip1')

    # Continue upsampling: another kernel-4, stride-2 transposed convolution
    # doubles the size again, to match the VGG layer-3 feature map.
    upsampled_skip1 = tf.layers.conv2d_transpose(skip1, num_classes, 4, strides = (2, 2), padding = 'same',
                                                kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3), name='4x_conv7')

    # 1x1 convolution of the VGG layer-3 (pool3) output;
    # its spatial size matches upsampled_skip1.
    pool3_1x1 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding = 'same',
                                 kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3), name='pool3_1x1')

    # Add the second skip connection.
    skip2 = tf.add(pool3_1x1, upsampled_skip1, name = 'skip2')

    # Final upsampling with stride (8, 8), as in the paper, restores the input resolution.
    x32_upsampled = tf.layers.conv2d_transpose(skip2, num_classes, 16, strides = (8,8), padding = 'same',
                                               kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3), name='32x_upsampled')
    return x32_upsampled
  • Define the optimize function

From the network's final output, the correct labels, the learning rate, and the number of classes, this builds the logits, train_op, and loss_op. During training, these outputs of the optimize function are used to optimize the model.

def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
    """
    Build the TensorFLow loss and optimizer operations.
    :param nn_last_layer: TF Tensor of the last layer in the neural network
    :param correct_label: TF Placeholder for the correct label image
    :param learning_rate: TF Placeholder for the learning rate
    :param num_classes: Number of classes to classify
    :return: Tuple of (logits, train_op, cross_entropy_loss)
    """
    # TODO: Implement function
    # Reshape the 4D tensors to 2D: each row is a pixel, each column a class.
    # The original tensor is 4D: batch, height, width, channels.
    logits = tf.reshape(nn_last_layer, (-1, num_classes), name = "fcn_logits")
    correct_label_reshaped = tf.reshape(correct_label, (-1, num_classes))

    # Per-pixel cross entropy between the predictions and the ground-truth labels
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = correct_label_reshaped)

    # Take the mean over all pixels for the total loss
    loss_op = tf.reduce_mean(cross_entropy, name = "fcn_loss")

    # Define the train operation: it finds the weights/parameters
    # that yield correct pixel labels.
    train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss_op, name = "fcn_train_op")

    return logits, train_op, loss_op
  • Define the train_nn function
def train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image,
             correct_label, keep_prob, learning_rate):
    """
    Train neural network and print out the loss during training.
    :param sess: TF Session
    :param epochs: Number of epochs
    :param batch_size: Batch size
    :param get_batches_fn: Function to get batches of training data.  Call using get_batches_fn(batch_size)
    :param train_op: TF Operation to train the neural network
    :param cross_entropy_loss: TF Tensor for the amount of loss
    :param input_image: TF Placeholder for input images
    :param correct_label: TF Placeholder for label images
    :param keep_prob: TF Placeholder for dropout keep probability
    :param learning_rate: TF Placeholder for learning rate
    """
    # TODO: Implement function
    keep_prob_value = 0.5
    learning_rate_value = 0.0001
    for epoch in range(epochs):
        print("**************************")
        print("Start epoch {} ...".format(epoch + 1))
        sum_of_loss = 0.0  # reset the running loss at the start of each epoch
        for image, label in get_batches_fn(batch_size):
            loss, _ = sess.run([cross_entropy_loss, train_op],
                               feed_dict = {input_image: image, correct_label: label,
                                            keep_prob: keep_prob_value, learning_rate: learning_rate_value})
            sum_of_loss += loss
            print(loss)

        print("Epoch {} ...".format(epoch + 1))
        print("Total loss = {:.3f}".format(sum_of_loss))
        print("------------------------")
  • Define the TensorFlow run() function

This involves setting the various parameters. It calls the previously defined layers function (the part that builds all the convolution layers) and the optimize function, which determine how many epochs to train, what optimization method to use, and which quantities to evaluate training with.

Finally, the train_nn function performs the actual training.

def run():
    num_classes = 2
    IMAGE_SHAPE = (160, 576)  # KITTI dataset uses 160x576 images
    data_dir = '/data'
    runs_dir = './runs2'
    save_model_path = './saver/model'

    # Set parameters
    EPOCHS = 100
    BATCH_SIZE = 32
    DROPOUT = 0.75  # note: defined but unused; train_nn hard-codes keep_prob = 0.5

    # Set placeholder
    correct_label = tf.placeholder(tf.float32, [None, IMAGE_SHAPE[0], IMAGE_SHAPE[1], num_classes])
    learning_rate = tf.placeholder(tf.float32)
    keep_prob = tf.placeholder(tf.float32)  # note: reassigned below to the keep_prob tensor loaded from VGG

    tests.test_for_kitti_dataset(data_dir)

    # Download pretrained vgg model
    helper.maybe_download_pretrained_vgg(data_dir)

    # OPTIONAL: Train and Inference on the cityscapes dataset instead of the Kitti dataset.
    # You'll need a GPU with at least 10 teraFLOPS to train on.
    #  https://www.cityscapes-dataset.com/

    with tf.Session() as sess:
        # Path to vgg model
        vgg_path = os.path.join(data_dir, 'vgg')
        # Create function to get batches
        get_batches_fn = helper.gen_batch_function(os.path.join(data_dir, 'data_road/training'), IMAGE_SHAPE)

        # OPTIONAL: Augment Images for better results
        #  https://datascience.stackexchange.com/questions/5224/how-to-prepare-augment-images-for-neural-network

        # TODO: Build NN using load_vgg, layers, and optimize function
        input_image, keep_prob, layer3_out, layer4_out, layer7_out = load_vgg(sess, vgg_path)

        # This is the output layer of the fcn. This is the 32x upsampling.
        layers_output = layers(layer3_out, layer4_out, layer7_out, num_classes)

        # Get the optimize function
        # optimize(nn_last_layer, correct_label, learning_rate, num_classes)
        # return logits, train_op, loss_op
        logits, train_op, cross_entropy = optimize(layers_output, correct_label, learning_rate, num_classes)

        #initialize all variables
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())

        print("Tensorflow gragh build successfully, Start training .")
        # TODO: Train NN using the train_nn function
        # Every argument passed to train_nn is a tensor node in the graph
        # (train_op, cross_entropy, input_image, correct_label, keep_prob, learning_rate).
        # They hold no concrete values here; values are fed in when the session runs.
        train_nn(sess, EPOCHS, BATCH_SIZE, get_batches_fn, train_op, cross_entropy, input_image, correct_label, keep_prob, learning_rate)

        # TODO: Save inference data using helper.save_inference_samples
        #  helper.save_inference_samples(runs_dir, data_dir, sess, image_shape, logits, keep_prob, input_image)
        helper.save_inference_samples(runs_dir, data_dir, sess, IMAGE_SHAPE, logits, keep_prob, input_image)
        print("Finished!")
        # OPTIONAL: Apply the trained model to a video

if __name__ == '__main__':
    run()
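For completeness, here is a hedged sketch of how the trained graph could be applied to a single image. This helper is not in the project (helper.save_inference_samples does something similar internally); it assumes the sess, logits, keep_prob, and input_image tensors from run(), and uses the era-appropriate scipy.misc image utilities.

import numpy as np
import scipy.misc
import tensorflow as tf

def segment_road(sess, logits, keep_prob, input_image, image_path, image_shape=(160, 576)):
    """Return a boolean (H, W) road mask for one image. Hypothetical helper."""
    image = scipy.misc.imresize(scipy.misc.imread(image_path), image_shape)
    # Per-pixel softmax over the two classes; logits has shape (H*W, 2)
    softmax = sess.run(tf.nn.softmax(logits),
                       feed_dict={keep_prob: 1.0, input_image: [image]})
    road_prob = softmax[:, 1].reshape(image_shape[0], image_shape[1])
    return road_prob > 0.5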
  • Code repository
Fred159/FCN-Semantic-segmentation-CarND (github.com)

5. Results

The reproduction results are below. They seem decent; most regions are separated fairly well. It is just that the closer to the object boundaries, the worse the labeling gets... perhaps the layers need to be fused more deeply through additional 1x1 convolutions.

o(︶︿︶)o sigh

[Figures: three sample segmentation results on KITTI]

6. Conclusion

Today I briefly wrote up FCN, which I admittedly do not fully understand myself.

AI is impressive indeed. Its training time is also quite long.

This thing trained for a long, long time.... ●﹏●

20190607
