Traffic Sign Recognition with Neural Networks

This post follows the blog posts linked below:
https://yq.aliyun.com/articles/67167
https://www.jianshu.com/p/d8feaddc7bdf
With them I successfully trained a neural network model and used it to recognize traffic sign images. The full source code is at the end.

1. Environment setup

The original author provides a preconfigured environment as a Docker image, so for simplicity I used it directly.

yum install docker
docker search waleedka

Then pull the image:

docker pull docker.io/waleedka/modern-deep-learning

I pulled the image a while ago and no longer remember the exact tag, but it is roughly docker.io/waleedka/modern-deep-learning. Docker then downloads the image, which takes quite a while.
Create a local folder called traffic and grant it permissions:

mkdir traffic
chmod 777 traffic

Then copy the training and test datasets over. Dataset download address: http://btsd.ethz.ch/shareddata/
It is best to lay out the folders as follows so the code can run as-is:
/root/traffic/datasets/BelgiumTS contains two folders, one for the training set and one for the test set.
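The loader used later assumes one subdirectory per label under Training/, each holding .ppm files, with the directory name doubling as the integer class label. A minimal sketch of that layout, built in a throwaway temporary tree (the folder names here are illustrative, not the real dataset):

```python
import os
import tempfile

# Build a tiny fake BelgiumTS-style tree: one numeric folder per label.
root = os.path.join(tempfile.mkdtemp(), "BelgiumTS", "Training")
for label in ("00000", "00001", "00002"):
    os.makedirs(os.path.join(root, label))
    open(os.path.join(root, label, "sample.ppm"), "w").close()

# Each subdirectory name doubles as the integer class label.
label_dirs = sorted(d for d in os.listdir(root)
                    if os.path.isdir(os.path.join(root, d)))
print(label_dirs)                     # ['00000', '00001', '00002']
print([int(d) for d in label_dirs])   # [0, 1, 2]
```

This is exactly the structure load_data in the script below walks.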
Inside the training set, each label folder holds the .ppm image files for that sign class.
Next, run the image:

docker run -it -p 8888:8888 -p 6006:6006 -v ~/traffic:/traffic waleedka/modern-deep-learning

Note: my project directory is ~/traffic on the host; the docker run command above maps it into the container at /traffic.
Once the container is running, the /traffic folder inside it mirrors the corresponding files on the Linux host.
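A quick way to confirm the mount worked is to check for the dataset path from inside the container; a minimal Python sketch (the path is the one used above, so adjust it if your mapping differs):

```python
import os

def mount_visible(data_dir):
    # True when the host folder mapped with `-v ~/traffic:/traffic`
    # is visible at this path inside the container.
    return os.path.isdir(data_dir)

# Inside the container this should print True once the dataset is in place.
print(mount_visible("/traffic/datasets/BelgiumTS/Training/"))
```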

2. Write the Python script: vi traffic.py
import os
import tensorflow as tf
import random
import skimage
from skimage import data, io, filters
import skimage.transform
import numpy as np

#%matplotlib inline

def load_data(data_dir):
    # Get all subdirectories of data_dir. Each represents a label.
    directories = [d for d in os.listdir(data_dir) 
                   if os.path.isdir(os.path.join(data_dir, d))]
    # Loop through the label directories and collect the data in
    # two lists, labels and images.
    labels = []
    images = []
    for d in directories:
        label_dir = os.path.join(data_dir, d)
        file_names = [os.path.join(label_dir, f) 
                      for f in os.listdir(label_dir) 
                      if f.endswith(".ppm")]
        for f in file_names:
            images.append(skimage.data.imread(f))
            labels.append(int(d))
    return images, labels

images, labels = load_data("/traffic/datasets/BelgiumTS/Training/")

for image in images[:5]:
    print("shape: {0}, min: {1}, max: {2}".format(
          image.shape, image.min(), image.max()))


# Resize images to 32x32; skimage also rescales pixel values to floats in [0, 1].
images32 = [skimage.transform.resize(image, (32, 32)) for image in images]
labels_a = np.array(labels)
images_a = np.array(images32)

print("labels: ", labels_a.shape, "\nimages: ", images_a.shape)


graph = tf.Graph()

with graph.as_default():
    # Placeholders for inputs and labels.
    images_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
    labels_ph = tf.placeholder(tf.int32, [None])

    # Flatten input from: [None, height, width, channels]
    # To: [None, height * width * channels] == [None, 3072]
    images_flat = tf.contrib.layers.flatten(images_ph)

    # Fully connected layer. 
    # Generates logits of size [None, 62]
    logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)

    # Convert logits to label indexes (int).
    # Shape [None], which is a 1D vector of length == batch_size.
    predicted_labels = tf.argmax(logits, 1)

    # Define the loss function. 
    # Cross-entropy is a good choice for classification.
    # labels=..., logits=..., ...
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=labels_ph))

    # Create training op.
    train = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

    # And, finally, an initialization op to execute before training.
    # TODO: rename to tf.global_variables_initializer() on TF 0.12.
    init = tf.initialize_all_variables()

print("images_flat: ", images_flat)
print("logits: ", logits)
print("loss: ", loss)
print("predicted_labels: ", predicted_labels)


print("begin train")
session = tf.Session(graph=graph)

# First step is always to initialize all variables. 
# We don't care about the return value, though. It's None.
_ = session.run([init])


for i in range(201):
    _, loss_value = session.run([train, loss], 
                                feed_dict={images_ph: images_a, labels_ph: labels_a})
    if i % 10 == 0:
        print("Loss: ", loss_value)

print("use model")
sample_indexes = random.sample(range(len(images32)), 10)
sample_images = [images32[i] for i in sample_indexes]
sample_labels = [labels[i] for i in sample_indexes]

# Run the "predicted_labels" op.
predicted = session.run([predicted_labels],
                        feed_dict={images_ph: sample_images})[0]
print(sample_labels)
print(predicted)

# Close the session to release resources.
session.close()
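For reference, the sparse softmax cross-entropy used for the loss above can be sketched in plain NumPy. This is a numerically stable version; the logits and labels are made-up values for illustration only:

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    # Stable log-softmax: subtract the row max before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # The loss is the negative log-probability of the correct class per row.
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.2, 3.0]])
labels = np.array([0, 2])
losses = sparse_softmax_xent(logits, labels)
print(losses.mean())
```

The second row, where the model is more confident in the correct class, yields the lower loss, which is the behavior the training loop relies on.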
Run the script: python traffic.py
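The script only prints the sampled ground-truth labels next to the predictions. To turn those two printed lists into a match rate, a small sketch (the label values here are hypothetical, not actual run output):

```python
import numpy as np

# Hypothetical ground truth vs. predictions, as printed by the script.
sample_labels = np.array([1, 5, 5, 22, 3, 40, 40, 17, 8, 61])
predicted     = np.array([1, 5, 4, 22, 3, 40, 38, 17, 8, 61])

# Element-wise comparison, summed, gives the number of correct predictions.
match_count = int((sample_labels == predicted).sum())
accuracy = match_count / len(sample_labels)
print("{}/{} correct, accuracy = {}".format(
    match_count, len(sample_labels), accuracy))
```

The same comparison, applied to the full test set under /traffic/datasets/BelgiumTS, gives an overall evaluation of the trained model.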