Machine Learning (4)

Udacity

1. A two-layer neural network
The first layer applies a set of weights and biases to the input X and passes the result through a ReLU activation. Its output feeds the next layer but is not visible outside the network, which is why it is called the hidden layer.
The second layer applies its own weights and biases to the hidden layer's output (i.e., the first layer's output), and a softmax turns the resulting logits into class probabilities.
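In symbols, the forward pass is hidden = ReLU(X·W1 + b1) followed by probs = softmax(hidden·W2 + b2). A minimal NumPy sketch of that forward pass (names and shapes here are illustrative; the TensorFlow version follows below):

import numpy as np

def forward(X, W1, b1, W2, b2):
    h = np.maximum(X @ W1 + b1, 0)    # hidden layer: affine transform + ReLU
    logits = h @ W2 + b2              # output layer: affine transform
    # numerically stable softmax over each row of logits
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)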
2. Backpropagation needs roughly twice as much memory as the forward pass: the activations computed on the way forward must be cached so that gradients can be computed on the way back.
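For a rough sense of scale with the network built below (image_size = 28 and num_labels = 10 are assumptions carried over from the notMNIST notebook):

# parameter count for this post's network, assuming 28*28 inputs,
# 512 hidden units, and 10 output classes
params = 28 * 28 * 512 + 512 + 512 * 10 + 10   # = 407,050
# backprop keeps a gradient per parameter on top of the cached forward
# activations, which is roughly where the 2x memory estimate comes from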

Worked example

import tensorflow as tf

batch_size = 128
# Add a hidden layer; note that the hidden layer needs its own weights and biases.
hidden_nodes = 512

def computation(dataset, weights, biases, is_dropout=False, prob=0.5):
    weight_sum = tf.add(tf.matmul(dataset, weights[0]), biases[0])
    hidden_layer = tf.nn.relu(weight_sum)
    if is_dropout:
        # In TF1, tf.nn.dropout's second argument is keep_prob, so prob is
        # the probability of keeping a unit.
        hidden_layer = tf.nn.dropout(hidden_layer, prob)
    logits = tf.matmul(hidden_layer, weights[1]) + biases[1]
    return logits
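To actually train with dropout on, the training call in the graph below would pass the flag explicitly, while the validation and test calls keep the defaults, since dropout should only be active during training. A sketch of what that call would look like:

# training logits with half the hidden units dropped each step (keep prob 0.5)
logits = computation(tf_train_dataset, weights, biases, is_dropout=True, prob=0.5)
# validation/test calls stay as computation(dataset, weights, biases): no dropout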

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  lmda = tf.placeholder(tf.float32)  # L2 regularization strength, fed per run
  # Variables: weights and biases for each of the two layers.
  weights = [tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes])),
             tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))]

  biases = [tf.Variable(tf.zeros([hidden_nodes])),
            tf.Variable(tf.zeros([num_labels]))]

  # added by zerof
  logits = computation(tf_train_dataset, weights, biases)


  # Training computation.
  # Regularized loss: cross-entropy plus an L2 penalty on both weight matrices.
  loss = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)
  ) + lmda * (tf.nn.l2_loss(weights[0]) + tf.nn.l2_loss(weights[1]))

  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(computation(tf_valid_dataset, weights, biases))
  test_prediction = tf.nn.softmax(computation(tf_test_dataset, weights, biases))
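For reference, tf.nn.l2_loss(w) returns half the sum of squared entries of w, so the loss above is cross-entropy + lmda * (||W1||²/2 + ||W2||²/2). A quick NumPy check of that equivalence:

import numpy as np

w = np.array([1.0, 2.0, 3.0])
print(np.sum(w ** 2) / 2)   # 7.0, the same value tf.nn.l2_loss(w) yields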
# With dropout enabled, change the corresponding computation() calls inside the
# softmax accordingly. Dropout helps when the same offsets are reused (few distinct
# minibatches); when offsets are numerous and varied it costs a little accuracy.
# num_steps trades speed against accuracy: smaller values run faster but train
# less accurately.
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs. Here step % 10
    # cycles through only 10 distinct minibatches; reusing the same offsets
    # pushes training accuracy toward 100% while test accuracy drops.
    offset = ((step % 10) * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels,lmda:1e-3}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))