Based on TensorFlow R1.5 + Anaconda (Python 3.6) + Windows.
Spent a whole day on this. I now roughly understand the workflow, but there is still a lot I don't understand. The "hello world" of deep learning is hard. The main problem: after training, testing with my own images gets almost everything wrong; the few times it was right felt like pure luck.
Here is the code. I kept the original English comments, which are quite detailed. The key class is Estimator: during training it prints logs and saves the meta, index, and checkpoint files. I'm still not clear on exactly how to use them; the methods I found may be outdated, but the latest API still provides this functionality.
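One common cause of near-random results on self-made test images is a preprocessing mismatch (an assumption about my failures, not something verified here): MNIST digits are white strokes on a black background, stored as float32 in [0, 1] and flattened to 784 values per image. A photo or screenshot of a black digit on white paper must be rescaled and inverted first. A minimal sketch, where the `preprocess` helper is hypothetical and not part of the tutorial code below:

```python
import numpy as np

def preprocess(img):
    """Convert a uint8 grayscale image (black digit on a white background)
    to MNIST's convention: float32 in [0, 1], white digit on black,
    flattened to 784 values."""
    img = img.astype(np.float32) / 255.0  # scale 0..255 -> 0..1
    img = 1.0 - img                       # invert: strokes become bright
    return img.reshape(-1)                # flatten to 784, like features["x"]

# Sanity check: a blank white page maps to an all-zero (all-black) input.
page = np.full((28, 28), 255, dtype=np.uint8)
x = preprocess(page)
```

Without this step the network sees roughly the photographic negative of what it was trained on, which is enough to make predictions look random.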
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date : 2018-01-30 13:37:25
# @Author : ZYM
'''This uses the latest high-level API, built around two key classes: Estimator and Dataset.'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Imports
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
# Our application logic will be added here
def cnn_model_fn(features, labels, mode):
    """Model function for CNN."""
    # Input Layer
    # Reshape X to 4-D tensor: [batch_size, width, height, channels]
    # batch_size: size of the subset of examples used for one gradient descent step during training
    # MNIST images are 28x28 pixels, and have one color channel
    # x is the input image batch; we want to accept any number of images, so batch_size is -1.
    # If we feed a batch of 5 (features["x"] holds 5 * 28 * 28 values), the shape becomes
    # [5, 28, 28, 1], and the input layer is built.
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    # Convolutional Layer #1
    # Computes 32 features using a 5x5 filter with ReLU activation.
    # Padding is added to preserve width and height.
    # Input Tensor Shape: [batch_size, 28, 28, 1]
    # Output Tensor Shape: [batch_size, 28, 28, 32]
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,             # number of features (filters) to extract per image
        kernel_size=[5, 5],     # filter size; kernel_size=5 also works when width == height
        padding="same",         # "same" keeps output size equal to input size by zero-padding the edges
        activation=tf.nn.relu)  # activation applied to the convolution output; here, ReLU
    # Pooling Layer #1
    # First max pooling layer with a 2x2 filter and stride of 2
    # Input Tensor Shape: [batch_size, 28, 28, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 32]
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    # Convolutional Layer #2
    # Computes 64 features using a 5x5 filter.
    # Padding is added to preserve width and height.
    # Input Tensor Shape: [batch_size, 14, 14, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 64]
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    # Pooling Layer #2
    # Second max pooling layer with a 2x2 fil