TensorFlow Official Tutorial: Building a New Convolutional Neural Network

  This post is based on the original English tutorial; you can follow along with both the original tutorial and this post.
  The official tutorial opens with a general introduction to CNNs, i.e. convolutional neural networks. If you are not familiar with them, read the original text first; here we go straight to the main point: building our own CNN classifier.


Building a CNN MNIST Classifier


  The TensorFlow layers module provides a high-level API that makes it easy to construct a neural network:

  • conv2d(). Constructs a two-dimensional convolutional layer. Takes number of filters, filter kernel size, padding, and activation function as arguments.
  • max_pooling2d(). Constructs a two-dimensional pooling layer using the max-pooling algorithm. Takes pooling filter size and stride as arguments.
  • dense(). Constructs a dense layer. Takes number of neurons and activation function as arguments.
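
  As a quick illustration of how these methods fit together, here is a minimal sketch (mine, not from the tutorial; it assumes TensorFlow 1.x, where the tf.layers API is available) that chains them on a dummy input and inspects the resulting static shapes:

import tensorflow as tf

# Dummy MNIST-shaped input: [batch_size, height, width, channels]
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv = tf.layers.conv2d(x, filters=32, kernel_size=[5, 5],
                        padding="same", activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=[2, 2], strides=2)
fc = tf.layers.dense(tf.reshape(pool, [-1, 14 * 14 * 32]), units=1024,
                     activation=tf.nn.relu)
print(conv.shape)   # (?, 28, 28, 32) -- "same" padding preserves 28x28
print(pool.shape)   # (?, 14, 14, 32) -- 2x2 pooling with stride 2 halves it
print(fc.shape)     # (?, 1024)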

  Each of these methods takes a tensor as input and returns a tensor as output, which makes it easy to connect one layer to the next, as the sketch above illustrates. The complete model function is implemented as follows:

def cnn_model_fn(features, labels, mode):
  """Model function for CNN."""
  # Input Layer
  # Reshape X to 4-D tensor: [batch_size, width, height, channels]
  # MNIST images are 28x28 pixels, and have one color channel
  input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])      # reshape the input features into the expected 4-D shape

  # Convolutional Layer #1
  # Computes 32 features using a 5x5 filter with ReLU activation.
  # Padding is added to preserve width and height.
  # Input Tensor Shape: [batch_size, 28, 28, 1]
  # Output Tensor Shape: [batch_size, 28, 28, 32]
  conv1 = tf.layers.conv2d(         # define the first convolutional layer
      inputs=input_layer,
      filters=32,                   # number of filters
      kernel_size=[5, 5],           # filter size
      padding="same",               # either "valid" or "same"
      activation=tf.nn.relu)        # activation function

  # Pooling Layer #1
  # First max pooling layer with a 2x2 filter and stride of 2
  # Input Tensor Shape: [batch_size, 28, 28, 32]
  # Output Tensor Shape: [batch_size, 14, 14, 32]
  pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)            # pooling layer

  # Convolutional Layer #2
  # Computes 64 features using a 5x5 filter.
  # Padding is added to preserve width and height.
  # Input Tensor Shape: [batch_size, 14, 14, 32]
  # Output Tensor Shape: [batch_size, 14, 14, 64]
  conv2 = tf.layers.conv2d(
      inputs=pool1,
      filters=64,
      kernel_size=[5, 5],
      padding="same",
      activation=tf.nn.relu)

  # Pooling Layer #2
  # Second max pooling layer with a 2x2 filter and stride of 2
  # Input Tensor Shape: [batch_size, 14, 14, 64]
  # Output Tensor Shape: [batch_size, 7, 7, 64]
  pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

  # Flatten tensor into a batch of vectors
  # Input Tensor Shape: [batch_size, 7, 7, 64]
  # Output Tensor Shape: [batch_size, 7 * 7 * 64]
  pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])      # flatten to two dimensions

  # Dense Layer
  # Densely connected layer with 1024 neurons
  # Input Tensor Shape: [batch_size, 7 * 7 * 64]
  # Output Tensor Shape: [batch_size, 1024]
  dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)     # fully connected layer

  # Add dropout operation; 0.6 probability that element will be kept
  dropout = tf.layers.dropout(      # dropout layer to reduce overfitting
      inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)     # dropout is only applied during training

  # Logits layer
  # Input Tensor Shape: [batch_size, 1024]
  # Output Tensor Shape: [batch_size, 10]
  logits = tf.layers.dense(inputs=dropout, units=10)

  predictions = {       # dictionary of prediction outputs
      # Generate predictions (for PREDICT and EVAL mode)
      "classes": tf.argmax(input=logits, axis=1),       # index of the largest logit, i.e. the predicted class
      # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
      # `logging_hook`.
      "probabilities": tf.nn.softmax(logits, name="softmax_tensor")     # predicted probability for each class
  }
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)       # return an EstimatorSpec object

  # Calculate Loss (for both TRAIN and EVAL modes)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)       # compute cross-entropy loss

  # Configure the Training Op (for TRAIN mode)
  if mode == tf.estimator.ModeKeys.TRAIN:       # optimize with stochastic gradient descent, learning rate 0.001
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(
        loss=loss,
        global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)  # return an EstimatorSpec object

  # Add evaluation metrics (for EVAL mode)
  eval_metric_ops = {
      "accuracy": tf.metrics.accuracy(      # accuracy as the evaluation metric
          labels=labels, predictions=predictions["classes"])}
  return tf.estimator.EstimatorSpec(
      mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
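
  One detail worth calling out in the loss calculation above: tf.losses.sparse_softmax_cross_entropy takes integer class ids directly, so the labels never need to be one-hot encoded. A minimal standalone sketch (mine, not from the tutorial):

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, 0.1],
                      [0.2, 0.3, 3.0]])   # two examples, three classes
labels = tf.constant([0, 2])              # integer class ids, no one-hot needed
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
with tf.Session() as sess:
    print(sess.run(loss))                 # mean cross-entropy over the batch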

Training and Evaluating the CNN MNIST Classifier


  With the CNN model function in place, we are ready to train and evaluate our model. First, we load the training and test data.

def main(unused_argv):
  # Load training and eval data
  mnist = tf.contrib.learn.datasets.load_dataset("mnist")
  train_data = mnist.train.images  # Returns np.array
  train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
  eval_data = mnist.test.images  # Returns np.array
  eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
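
  As an optional sanity check (mine, not part of the original tutorial), it can help to confirm what load_dataset returns: the images arrive as flattened 784-element float32 vectors, which is exactly why cnn_model_fn reshapes them to [-1, 28, 28, 1]:

  # Optional sanity check (not in the original tutorial)
  print(train_data.shape, train_data.dtype)      # (55000, 784) float32
  print(train_labels.shape, train_labels.dtype)  # (55000,) int32
  print(eval_data.shape)                         # (10000, 784)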

  Next, let's create an Estimator for training and evaluating our model. The model_fn argument specifies the model function to use for training, evaluation, and prediction; the model_dir argument specifies the directory where model data (checkpoints) will be saved.

  # Create the Estimator
  mnist_classifier = tf.estimator.Estimator(
      model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
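
  Because model_dir points at a fixed path, checkpoints accumulate there: if the script is run again, the Estimator restores the latest checkpoint from /tmp/mnist_convnet_model and resumes from that step rather than starting from scratch. You can see exactly such a restore in the log output at the end of this post.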

  Next, we set up logging to record the per-class probability scores (the softmax_tensor) during training; every_n_iter=50 tells the hook to log the probabilities once every 50 training steps.

  # Set up logging for predictions
  # Log the values in the "Softmax" tensor with label "probabilities"
  tensors_to_log = {"probabilities": "softmax_tensor"}
  logging_hook = tf.train.LoggingTensorHook(        # pass the tensors and logging frequency to LoggingTensorHook
      tensors=tensors_to_log, every_n_iter=50)
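
  Note the convention in tensors_to_log: each key ("probabilities") is the label that will appear in the log output, and each value ("softmax_tensor") must match the name= argument given to the corresponding op in cnn_model_fn, here tf.nn.softmax(logits, name="softmax_tensor").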

  With the model defined, the data loaded, and logging configured, we can now train our model.

  train_input_fn = tf.estimator.inputs.numpy_input_fn(
      x={"x": train_data},
      y=train_labels,
      batch_size=100,
      num_epochs=None,      # train until the specified number of steps is reached
      shuffle=True)         # shuffle the training data
  mnist_classifier.train(
      input_fn=train_input_fn,
      steps=20000,              # train for 20,000 steps
      hooks=[logging_hook])     # attach the logging hook
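
  A quick back-of-the-envelope check on what 20,000 steps means: with batch_size=100, training touches 20,000 × 100 = 2,000,000 examples, which is roughly 36 passes (epochs) over the 55,000-image MNIST training set.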

  Finally, we evaluate our model.

  # Evaluate the model and print results
  eval_input_fn = tf.estimator.inputs.numpy_input_fn(
      x={"x": eval_data},
      y=eval_labels,
      num_epochs=1,
      shuffle=False)        # iterate through the data once, in order
  eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
  print(eval_results)

  The results are shown below:

INFO:tensorflow:Restoring parameters from /tmp/mnist_convnet_model/model.ckpt-20000
INFO:tensorflow:Finished evaluation at 2018-01-08-14:23:56
INFO:tensorflow:Saving dict for global step 20000: accuracy = 0.9698, global_step = 20000, loss = 0.101203
{'loss': 0.10120341, 'global_step': 20000, 'accuracy': 0.9698}
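
  As a hypothetical extension (not shown in the original tutorial), the PREDICT branch of cnn_model_fn can also be exercised with Estimator.predict, for example on the first five test images; the field names below match the predictions dictionary defined earlier:

  # Run inference on a few test images (hypothetical extension)
  predict_input_fn = tf.estimator.inputs.numpy_input_fn(
      x={"x": eval_data[:5]},
      num_epochs=1,
      shuffle=False)
  for pred in mnist_classifier.predict(input_fn=predict_input_fn):
      print(pred["classes"], pred["probabilities"].max())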