TensorFlow 1.x Learning, Part 7: Dropout and Optimizers

Dropout and Optimizers

Dropout is similar in spirit to bagging: each mini-batch trains a randomly thinned sub-network, which helps prevent overfitting. The choice of optimizer also affects how the model converges.
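
As a rough intuition for what tf.nn.dropout does (a minimal numpy sketch of "inverted dropout", not TensorFlow's actual implementation): each activation is zeroed with probability 1 - keep_prob, and survivors are scaled by 1/keep_prob so the expected activation is unchanged; every mini-batch therefore trains a different random sub-network.

import numpy as np

def inverted_dropout(a, keep_prob):
    # Zero each unit with probability 1 - keep_prob; scale survivors
    # by 1/keep_prob so the expected activation stays the same.
    mask = (np.random.rand(*a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob

a = np.ones((4, 5), dtype=np.float32)
print(inverted_dropout(a, 0.5))  # roughly half zeros; survivors equal 2.0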

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST",one_hot = True)
Extracting MNIST\train-images-idx3-ubyte.gz
Extracting MNIST\train-labels-idx1-ubyte.gz
Extracting MNIST\t10k-images-idx3-ubyte.gz
Extracting MNIST\t10k-labels-idx1-ubyte.gz
batch_size = 50
n_batchs = mnist.train.num_examples // batch_size
n_batchs
1100
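(The TF 1.x MNIST loader puts 55,000 examples in the train split, so 55000 // 50 = 1100 batches per epoch.)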
Define the input layer

Two placeholders for the data, plus one for the dropout keep probability.

x = tf.placeholder(shape=[None,784],dtype=tf.float32)
y = tf.placeholder(shape=[None,10],dtype=tf.float32)
keep_prob = tf.placeholder(tf.float32)
x,y,keep_prob
(<tf.Tensor 'Placeholder:0' shape=(?, 784) dtype=float32>,
 <tf.Tensor 'Placeholder_1:0' shape=(?, 10) dtype=float32>,
 <tf.Tensor 'Placeholder_2:0' shape=<unknown> dtype=float32>)
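Feeding keep_prob through a placeholder lets one graph serve both phases: dropout stays active during training (keep_prob = 0.5 in the loop below) and is switched off at evaluation time (keep_prob = 1.0).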
Define the hidden layers
w1 = tf.Variable(tf.zeros([784,1024]))
a1 = tf.nn.sigmoid(tf.matmul(x,w1))
o1 = tf.nn.dropout(a1,keep_prob)
w1,a1,o1
(<tf.Variable 'Variable:0' shape=(784, 1024) dtype=float32_ref>,
 <tf.Tensor 'Sigmoid:0' shape=(?, 1024) dtype=float32>,
 <tf.Tensor 'dropout/mul:0' shape=(?, 1024) dtype=float32>)
w2 = tf.Variable(tf.zeros([1024,512]))
a2 = tf.nn.sigmoid(tf.matmul(o1,w2))
o2 = tf.nn.dropout(a2,keep_prob)
w2,a2,o2
(<tf.Variable 'Variable_1:0' shape=(1024, 512) dtype=float32_ref>,
 <tf.Tensor 'Sigmoid_1:0' shape=(?, 512) dtype=float32>,
 <tf.Tensor 'dropout_1/mul:0' shape=(?, 512) dtype=float32>)
w3 = tf.Variable(tf.zeros([512,128]))
a3 = tf.nn.sigmoid(tf.matmul(o2,w3))
o3 = tf.nn.dropout(a3,keep_prob)
w3,a3,o3
(<tf.Variable 'Variable_3:0' shape=(512, 128) dtype=float32_ref>,
 <tf.Tensor 'Sigmoid_2:0' shape=(?, 128) dtype=float32>,
 <tf.Tensor 'dropout_2/mul:0' shape=(?, 128) dtype=float32>)
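One caveat worth flagging: with every weight initialized to zero, all units in a layer would normally receive identical gradients and stay identical; here the random dropout masks are what break that symmetry, which is plausibly why training still succeeds. A more conventional initialization (a sketch, not the notebook's code) uses small random values plus a bias:

w1 = tf.Variable(tf.truncated_normal([784, 1024], stddev=0.1))  # small random init
b1 = tf.Variable(tf.zeros([1024]))
a1 = tf.nn.sigmoid(tf.matmul(x, w1) + b1)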
Define the output layer
w4 = tf.Variable(tf.zeros([128,10]))
prediction = tf.nn.softmax(tf.matmul(o3,w4))
w4,prediction
(<tf.Variable 'Variable_4:0' shape=(128, 10) dtype=float32_ref>,
 <tf.Tensor 'Softmax:0' shape=(?, 10) dtype=float32>)
Define the loss function and optimizer
# Note: `prediction` is already softmaxed, so this applies softmax twice;
# see the corrected sketch after the deprecation warning below.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
WARNING:tensorflow:From <ipython-input-20-7c1707759a70>:1: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.
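
A subtle issue in the cell above: prediction already applies softmax, so the loss applies softmax twice. The model still trains, but the reported loss is compressed, which is why it plateaus near 1.48 below instead of approaching 0. A corrected sketch, using the v2 op the warning recommends, keeps the raw logits separate:

logits = tf.matmul(o3, w4)          # unnormalized scores for the loss
prediction = tf.nn.softmax(logits)  # probabilities, used only for accuracy
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)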

Compute the accuracy
correct_prediction = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
correct_prediction,accuracy
(<tf.Tensor 'Equal:0' shape=(?,) dtype=bool>,
 <tf.Tensor 'Mean_1:0' shape=() dtype=float32>)
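As a quick sanity check of this graph on toy values (hypothetical, not from the notebook):

toy_pred = np.array([[0.1, 0.9], [0.8, 0.2]], dtype=np.float32)  # argmax: 1, 0
toy_y    = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=np.float32)  # argmax: 1, 0
toy_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(toy_pred, 1),
                                          tf.argmax(toy_y, 1)), tf.float32))
with tf.Session() as sess:
    print(sess.run(toy_acc))  # 1.0: both predictions match the labels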
Initialize the variables and train
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # force CPU execution
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(50):
        for batch in range(n_batchs):
            batch_x,batch_y = mnist.train.next_batch(batch_size)
            sess.run([train_step],feed_dict = {x:batch_x,y:batch_y,keep_prob:0.5})
        acc, loss_value = sess.run([accuracy, loss], feed_dict = {x:mnist.test.images, y: mnist.test.labels, keep_prob:1.0})
        print("Iter: ", epoch, "Loss: ",loss_value,"Accuracy: ",acc)
Iter:  0 Loss:  1.8207536 Accuracy:  0.6533
Iter:  1 Loss:  1.6184238 Accuracy:  0.8473
Iter:  2 Loss:  1.5330971 Accuracy:  0.9298
Iter:  3 Loss:  1.520271 Accuracy:  0.9413
Iter:  4 Loss:  1.5108591 Accuracy:  0.9507
Iter:  5 Loss:  1.5073053 Accuracy:  0.9542
Iter:  6 Loss:  1.5070764 Accuracy:  0.9544
Iter:  7 Loss:  1.5024925 Accuracy:  0.959
Iter:  8 Loss:  1.4975777 Accuracy:  0.9632
Iter:  9 Loss:  1.5004102 Accuracy:  0.9601
Iter:  10 Loss:  1.5009208 Accuracy:  0.9601
Iter:  11 Loss:  1.4942816 Accuracy:  0.9671
Iter:  12 Loss:  1.4960778 Accuracy:  0.9653
Iter:  13 Loss:  1.4930174 Accuracy:  0.9683
Iter:  14 Loss:  1.4911144 Accuracy:  0.97
Iter:  15 Loss:  1.4915555 Accuracy:  0.9693
Iter:  16 Loss:  1.4900645 Accuracy:  0.9709
Iter:  17 Loss:  1.4905344 Accuracy:  0.9702
Iter:  18 Loss:  1.4893377 Accuracy:  0.9721
Iter:  19 Loss:  1.4900181 Accuracy:  0.9708
Iter:  20 Loss:  1.4876869 Accuracy:  0.9734
Iter:  21 Loss:  1.4889383 Accuracy:  0.9718
Iter:  22 Loss:  1.486607 Accuracy:  0.9744
Iter:  23 Loss:  1.4861494 Accuracy:  0.9751
Iter:  24 Loss:  1.4878614 Accuracy:  0.973
Iter:  25 Loss:  1.4853209 Accuracy:  0.9759
Iter:  26 Loss:  1.4859449 Accuracy:  0.9748
Iter:  27 Loss:  1.484467 Accuracy:  0.9761
Iter:  28 Loss:  1.485359 Accuracy:  0.9759
Iter:  29 Loss:  1.4862666 Accuracy:  0.9752
Iter:  30 Loss:  1.4857346 Accuracy:  0.9755
Iter:  31 Loss:  1.4835181 Accuracy:  0.9775
Iter:  32 Loss:  1.484398 Accuracy:  0.9767
Iter:  33 Loss:  1.4848046 Accuracy:  0.9761
Iter:  34 Loss:  1.4833093 Accuracy:  0.9775
Iter:  35 Loss:  1.4836979 Accuracy:  0.9773
Iter:  36 Loss:  1.4842311 Accuracy:  0.9766
Iter:  37 Loss:  1.485082 Accuracy:  0.976
Iter:  38 Loss:  1.4838086 Accuracy:  0.9775
Iter:  39 Loss:  1.4829488 Accuracy:  0.9783
Iter:  40 Loss:  1.4827005 Accuracy:  0.9785
Iter:  41 Loss:  1.4823732 Accuracy:  0.9788
Iter:  42 Loss:  1.4820547 Accuracy:  0.9789
Iter:  43 Loss:  1.4821239 Accuracy:  0.9788
Iter:  44 Loss:  1.4822471 Accuracy:  0.9787
Iter:  45 Loss:  1.483668 Accuracy:  0.9775
Iter:  46 Loss:  1.4829131 Accuracy:  0.9782
Iter:  47 Loss:  1.4833189 Accuracy:  0.9777
Iter:  48 Loss:  1.4830614 Accuracy:  0.978
Iter:  49 Loss:  1.4828451 Accuracy:  0.9782
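
The introduction notes that different optimizers affect convergence differently, but the run above uses only Adam. A minimal sketch of how one might compare optimizers on the same graph (the learning rates here are illustrative, not tuned):

optimizers = {
    "sgd":      tf.train.GradientDescentOptimizer(0.5),
    "momentum": tf.train.MomentumOptimizer(0.1, momentum=0.9),
    "adam":     tf.train.AdamOptimizer(0.001),
}
for name, opt in optimizers.items():
    step = opt.minimize(loss)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(10):
            for _ in range(n_batchs):
                bx, by = mnist.train.next_batch(batch_size)
                sess.run(step, feed_dict={x: bx, y: by, keep_prob: 0.5})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images,
                                            y: mnist.test.labels,
                                            keep_prob: 1.0})
        print(name, "test accuracy:", acc)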
