Learning the Official TensorFlow Docs | TensorFlow Mechanics 101

Getting to Know the MNIST Dataset

Purpose of each subset:

  • data_sets.train: 55000 images and labels, the primary training set.
  • data_sets.validation: 5000 images and labels, used to iteratively validate training accuracy.
  • data_sets.test: 10000 images and labels, used for the final test of trained accuracy.

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

print (mnist.train.images.shape)
print (mnist.train.labels.shape)
print (mnist.validation.images.shape)
print (mnist.validation.labels.shape)
print (mnist.test.images.shape)
print (mnist.test.labels.shape)

Output:

Extracting MNIST_data/train-images-idx3-ubyte.gz  # training-set images: 55000 training images, 5000 validation images
Extracting MNIST_data/train-labels-idx1-ubyte.gz  # digit labels for the training-set images
Extracting MNIST_data/t10k-images-idx3-ubyte.gz   # test-set images: 10000 images
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz   # digit labels for the test-set images
(55000, 784) # shape of the training images
(55000, 10)  # shape of the training labels
(5000, 784)  # shape of the validation images
(5000, 10)   # shape of the validation labels
(10000, 784) # shape of the test images
(10000, 10)  # shape of the test labels

Store the data in variables and inspect it:

val_data=mnist.validation.images
val_label=mnist.validation.labels

print('Validation images:\n', val_data)
print('Validation labels:\n', val_label)

Output:

Validation images:
 [[ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 ..., 
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]]
Validation labels:
 [[ 0.  0.  0. ...,  0.  0.  0.]
 [ 1.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 ..., 
 [ 0.  0.  1. ...,  0.  0.  0.]
 [ 0.  1.  0. ...,  0.  0.  0.]
 [ 0.  0.  1. ...,  0.  0.  0.]]
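
Each 784-element row is a flattened 28x28 grayscale image, and each label is a 10-element one-hot vector (because of one_hot=True above). A minimal, hypothetical inspection sketch that reuses val_data and val_label from the code above:

import numpy as np

first_image = val_data[0].reshape(28, 28)   # 784 = 28 * 28 pixels
first_label = np.argmax(val_label[0])       # one-hot vector -> digit class

print('digit:', first_label)
print('non-zero pixels:', np.count_nonzero(first_image))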

Build the Graph

After creating placeholders for the data, the graph is built from the mnist.py file through three stages of pattern functions: inference(), loss(), and training(). After these three stages the graph is complete.

1. inference(): builds the graph as far as needed for the network to run forward and produce predictions.

2. loss(): adds to the inference graph the ops required to compute the loss.

3. training(): adds to the loss graph the ops required to compute and apply gradients.


Inference

def inference(images, hidden1_units, hidden2_units):
  """Builds the graph. It takes the images placeholder as input and, on top of
     it, builds a pair of fully connected layers with ReLU (Rectified Linear
     Units) activations, plus a ten-node linear layer that produces the output
     logits.
  Args:
    images: Images placeholder, the input.
    hidden1_units: Size of the first hidden layer.
    hidden2_units: Size of the second hidden layer.
  Returns:
    softmax_linear: Tensor containing the computed predictions (logits).
  """

  # Hidden 1
  with tf.name_scope('hidden1'):
    weights = tf.Variable(
        tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]),
                         name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
  # Hidden 2
  with tf.name_scope('hidden2'):
    weights = tf.Variable(
        tf.truncated_normal([hidden1_units, hidden2_units],
                            stddev=1.0 / math.sqrt(float(hidden1_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden2_units]),
                         name='biases')
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  # Linear
  with tf.name_scope('softmax_linear'):
    weights = tf.Variable(
        tf.truncated_normal([hidden2_units, NUM_CLASSES],
                            stddev=1.0 / math.sqrt(float(hidden2_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([NUM_CLASSES]),
                         name='biases')
    logits = tf.matmul(hidden2, weights) + biases
  return logits

Each layer is created beneath a unique tf.name_scope, and every element created within that scope carries the scope name as a prefix.

with tf.name_scope('hidden1'):
For example, when these layers are created under the hidden1 scope, the unique name given to the weights variable will be "hidden1/weights".
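
A minimal standalone sketch of this prefixing behaviour (the shape below is made up purely for illustration):

import tensorflow as tf

with tf.name_scope('hidden1'):
  weights = tf.Variable(tf.zeros([784, 128]), name='weights')

print(weights.name)  # prints "hidden1/weights:0"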


Within each scope, the weights and biases used by that layer are created as tf.Variable instances with their desired shapes.

Each variable is given an initializer op when it is constructed.

  • The weights are initialized with tf.truncated_normal, with a 2-D tensor shape whose first dimension is the number of units in the layer the weights connect from (here, the size of the input image) and whose second dimension is the number of units in the layer they connect to. The tf.truncated_normal initializer generates a random distribution with the given mean and standard deviation.
  • The biases are initialized with tf.zeros, so they all start at zero, and their shape is the number of units in the layer they connect to.

  # Hidden 1
  with tf.name_scope('hidden1'):
    weights = tf.Variable(
        tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]),
                         name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)

After the weights and biases of each layer have been defined, the layers are connected with an activation function (ReLU here).

hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)

hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)

logits = tf.matmul(hidden2, weights) + biases


Loss

def loss(logits, labels):
  """Builds the graph further by adding the ops required to compute the loss (between the output layer and the labels).
  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size].
  Returns:
    loss: Loss tensor of type float.
  """
  labels = tf.to_int64(labels)
  cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits, name='xentropy')
  return tf.reduce_mean(cross_entropy, name='xentropy_mean')

tf.nn.sparse_softmax_cross_entropy_with_logits() computes the softmax of the output-layer logits and then the cross entropy between that softmax and the labels.

tf.reduce_mean() takes the mean of cross_entropy over the batch and uses it as the total loss.
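
A small sketch of what these two calls do for a hypothetical batch of two examples with three classes (the logits and labels below are made up):

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, 0.3],
                      [0.1, 0.2, 3.0]])
labels = tf.constant([0, 2], dtype=tf.int64)   # true class index per example

# Per-example losses, shape [2].
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits, name='xentropy')
# Scalar loss: the mean over the batch.
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')

with tf.Session() as sess:
  print(sess.run([cross_entropy, loss]))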

TensorFlow has a family of functions that reduce a tensor along a given dimension. For example:

Maximum: tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Mean: tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Argument (1) input_tensor: the tensor to reduce.

Argument (2) reduction_indices: the dimension(s) along which to reduce.

Arguments (3) and (4) can be ignored here.
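
A minimal sketch of how reduction_indices changes the result (TF 1.x style, matching the rest of this post; the values are made up):

import tensorflow as tf

x = tf.constant([[1., 2.],
                 [3., 4.]])

mean_all  = tf.reduce_mean(x)                        # 2.5, mean over every element
mean_cols = tf.reduce_mean(x, reduction_indices=0)   # [2., 3.], mean down each column
mean_rows = tf.reduce_mean(x, reduction_indices=1)   # [1.5, 3.5], mean across each row

with tf.Session() as sess:
  print(sess.run([mean_all, mean_cols, mean_rows]))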

Training

def training(loss, learning_rate):
  """Sets up the training Op.
  Creates a summarizer to track the loss over time in TensorBoard.
  Creates an optimizer and applies gradient descent to all trainable variables.
  The Op returned by this function is what must be run in a session to make the model train.
  Args:
    loss: Loss tensor, from loss().
    learning_rate: The learning rate to use for gradient descent.
  Returns:
    train_op: The Op for training.
  """
  # Add a scalar summary for the snapshot loss.
  tf.summary.scalar('loss', loss)
  # Instantiate a GradientDescentOptimizer that applies gradient descent at the given learning rate.
  optimizer = tf.train.GradientDescentOptimizer(learning_rate)
  # Create a variable to hold the global training step counter.
  global_step = tf.Variable(0, name='global_step', trainable=False)
  # minimize() both updates the weights to reduce the loss and increments the global step counter.
  train_op = optimizer.minimize(loss, global_step=global_step)
  return train_op

Evaluation

def evaluation(logits, labels):
  """Evaluate the quality of the logits at predicting the label.
  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size], with values in the
      range [0, NUM_CLASSES).
  Returns:
    A scalar int32 tensor with the number of examples (out of batch_size)
    that were predicted correctly.
  """
  # For a classifier model, we can use the in_top_k Op.
  # It returns a bool tensor with shape [batch_size] that is true for
  # the examples where the label is in the top k (here k=1)
  # of all logits for that example.
  correct = tf.nn.in_top_k(logits, labels, 1)
  # Return the number of true entries.
  return tf.reduce_sum(tf.cast(correct, tf.int32))

  • tf.nn.in_top_k(predictions, targets, k, name=None): a helper for classifiers. Given the model's predicted scores predictions and the true classes targets, it checks, for each example, whether the true class is among the top k predictions (here k=1) and returns true for that example if so. A runnable sketch follows this list.
  • tf.cast(x, dtype, name=None): converts x (or x.values) to dtype.
    # tensor a is [1.8, 2.2], dtype=tf.float
    tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
  • tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the sum of the elements of the input tensor, optionally along the axes given by reduction_indices.
    # 'x' is [[1, 1, 1]
    #         [1, 1, 1]]
    tf.reduce_sum(x) ==> 6
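
Putting the three together, a small illustrative sketch of how evaluation() counts correct predictions (the logits and labels below are made up):

import tensorflow as tf

logits = tf.constant([[0.1, 0.8, 0.1],    # example 0: class 1 scores highest
                      [0.6, 0.3, 0.1]])   # example 1: class 0 scores highest
labels = tf.constant([1, 2])              # true classes: 1 (correct), 2 (wrong)

correct = tf.nn.in_top_k(logits, labels, 1)               # [True, False]
num_correct = tf.reduce_sum(tf.cast(correct, tf.int32))   # 1

with tf.Session() as sess:
  print(sess.run([correct, num_correct]))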


Train the Model

Once the graph is built, it is trained and evaluated in an iterative loop driven by the user code in fully_connected_feed.py.

def run_training():
  """Train MNIST for a number of steps."""
  # Read the datasets into data_sets.
  data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)

  # Tell TensorFlow that everything built below belongs to the default Graph.
  with tf.Graph().as_default():
    # Generate placeholders for the images and labels.
    images_placeholder, labels_placeholder = placeholder_inputs(
        FLAGS.batch_size)

    # Build a Graph that computes predictions from the inference model.
    logits = mnist.inference(images_placeholder,
                             FLAGS.hidden1,
                             FLAGS.hidden2)

    # Add to the Graph the Ops for loss calculation.
    loss = mnist.loss(logits, labels_placeholder)

    # Add to the Graph the Ops that calculate and apply gradients.
    train_op = mnist.training(loss, FLAGS.learning_rate)

    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = mnist.evaluation(logits, labels_placeholder)

    # Build the summary Tensor based on the TF collection of Summaries.
    summary = tf.summary.merge_all()

    # Add the variable initializer Op.
    init = tf.global_variables_initializer()

    # Create a saver for writing training checkpoints.
    saver = tf.train.Saver()

    # Create a session for running Ops on the Graph.
    sess = tf.Session()

    # Instantiate a SummaryWriter to output summaries and the Graph.
    summary_writer = tf.summary.FileWriter(FLAGS.log_dir, sess.graph)

    # And then after everything is built:

    # Run the Op to initialize the variables.
    sess.run(init)

    # Start the training loop.
    for step in xrange(FLAGS.max_steps):
      start_time = time.time()

      # Fill a feed dictionary with the actual set of images and labels
      # for this particular training step.
      feed_dict = fill_feed_dict(data_sets.train,
                                 images_placeholder,
                                 labels_placeholder)

      # Run one step of the model.  The return values are the activations
      # from the `train_op` (which is discarded) and the `loss` Op.  To
      # inspect the values of your Ops or variables, you may include them
      # in the list passed to sess.run() and the value tensors will be
      # returned in the tuple from the call.
      _, loss_value = sess.run([train_op, loss],
                               feed_dict=feed_dict)

      duration = time.time() - start_time

      # Write the summaries and print an overview fairly often.
      if step % 100 == 0:
        # Print status to stdout.
        print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
        # Update the events file.
        summary_str = sess.run(summary, feed_dict=feed_dict)
        summary_writer.add_summary(summary_str, step)
        summary_writer.flush()

      # Save a checkpoint and evaluate the model periodically.
      if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
        checkpoint_file = os.path.join(FLAGS.log_dir, 'model.ckpt')
        saver.save(sess, checkpoint_file, global_step=step)
        # Evaluate against the training set.
        print('Training Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.train)
        # Evaluate against the validation set.
        print('Validation Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.validation)
        # Evaluate against the test set.
        print('Test Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.test)

Feeding Data into the Graph

TensorFlow's feed mechanism lets an application inject data into the Graph at run time.

At each training step, a feed dictionary is first generated from the training data; it contains the batch of training examples used in this iteration.

feed_dict = fill_feed_dict(data_sets.train,
                           images_placeholder,
                           labels_placeholder)

The fill_feed_dict() function:

def fill_feed_dict(data_set, images_pl, labels_pl):
  """Fills the feed_dict for training the given step.
  A feed_dict takes the form of:
  feed_dict = {
      placeholder: tensor of values to be passed for that placeholder,
      ....
  }
  Args:
    data_set: The set of images and labels, from input_data.read_data_sets()
    images_pl: The images placeholder, from placeholder_inputs().
    labels_pl: The labels placeholder, from placeholder_inputs().
  Returns:
    feed_dict: The feed dictionary mapping from placeholders to values.
  """
  # fill_feed_dict() queries the given DataSet for its next batch_size batch of
  # images and labels; the tensors matching the placeholders are filled with them.
  images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
                                                 FLAGS.fake_data)
  # Then a Python dict object is created with the placeholders as keys and the
  # corresponding feed tensors as values.
  feed_dict = {
      images_pl: images_feed,
      labels_pl: labels_feed,
  }
  # This dict is later passed into sess.run() as its feed_dict parameter,
  # supplying the input examples for this training step.
  return feed_dict

Evaluate the Model

Every thousand training steps, the code evaluates the model against the training, validation, and test datasets.

do_eval() is called three times, once each for the training, validation, and test datasets.

        # Evaluate against the training set.
        print('Training Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.train)
        # Evaluate against the validation set.
        print('Validation Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.validation)
        # Evaluate against the test set.
        print('Test Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.test)

The do_eval() function:

def do_eval(sess,
            eval_correct,
            images_placeholder,
            labels_placeholder,
            data_set):
  """Runs one evaluation against the full epoch of data.
  Args:
    sess: The session in which the model has been trained.
    eval_correct: The Tensor that returns the number of correct predictions.
    images_placeholder: The images placeholder.
    labels_placeholder: The labels placeholder.
    data_set: The set of images and labels to evaluate, from
      input_data.read_data_sets().
  """
  # And run one epoch of eval.
  true_count = 0  
  steps_per_epoch = data_set.num_examples // FLAGS.batch_size
  num_examples = steps_per_epoch * FLAGS.batch_size  # total number of examples evaluated
  for step in xrange(steps_per_epoch):
    feed_dict = fill_feed_dict(data_set,
                               images_placeholder,
                               labels_placeholder)
    true_count += sess.run(eval_correct, feed_dict=feed_dict)  # number of correct predictions
  precision = float(true_count) / num_examples  # accuracy (precision @ 1)
  print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        (num_examples, true_count, precision))

I did not manage to get TensorBoard working; setting it aside for now. (2017/3/13)

/root/anaconda3/lib/python3.5

/root/anaconda3/lib/python3.5/site-packages/tensorflow/tensorboard/tensorboard.py

/root/anaconda3/lib/python3.5/python /root/anaconda3/lib/python3.5/site-packages/tensorflow/tensorboard/tensorboard.py --logdir=/tmp/tensorflow/mnist/logs/fully_connected_feed


