MorvanTest17_Dropout

trouble:

code:

    Wx_plus_b = tf.nn.dropout(Wx_plus_b , , , , , rate=None)### the main dropout call

error:

  File "<ipython-input-3-6a568e0de851>", line 6
    Wx_plus_b = tf.nn.dropout(Wx_plus_b , , , , , rate=None)### the main dropout call
                                          ^
SyntaxError: invalid syntax

code:

    Wx_plus_b = tf.nn.dropout(Wx_plus_b , rate=None)### the main dropout call

error:

ValueError: You must provide a rate to dropout.

code:

    Wx_plus_b = tf.nn.dropout(Wx_plus_b , rate)### the main dropout call

warning:

Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
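The relationship in the warning can be sketched in plain NumPy (a hedged sketch of inverted dropout, not TensorFlow's actual implementation): each unit is zeroed with probability `rate = 1 - keep_prob`, and the survivors are scaled by `1 / keep_prob` so the expected activation is unchanged.

```python
import numpy as np

def dropout(x, rate, rng=np.random.default_rng(0)):
    """Inverted dropout: zero units with probability `rate`,
    scale survivors by 1/(1 - rate). Note rate = 1 - keep_prob."""
    if rate == 0:  # rate=0 means keep_prob=1: keep everything
        return x
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob  # True = keep this unit
    return x * mask / keep_prob

x = np.ones((4, 5))
print(dropout(x, rate=0))    # unchanged: nothing is dropped
print(dropout(x, rate=0.5))  # roughly half zeros, survivors scaled to 2.0
```

So `rate=0` is a no-op, which is why the attempt above raised no error but also did no dropout.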

code:

    Wx_plus_b = tf.nn.dropout(Wx_plus_b , rate=0)### the main dropout call

output:

rate: Tensor("Placeholder:0", dtype=float32) None

code:

    print("keep_prob" , keep_prob , sess.run(train_step , feed_dict={xs:X_train ,ys:y_train , keep_prob:0.5}))

output:

keep_prob Tensor("Placeholder:0", dtype=float32) None

code:

    print("keep_prob:" , keep_prob ,sess.run(keep_prob , feed_dict={keep_prob:0.5}))

output:

keep_prob: Tensor("Placeholder:0", dtype=float32) 0.5

sklearn.cross_validation is deprecated; use sklearn.model_selection instead.

import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

Load the data (differently from before).

X = digits.data holds the images of the digits 0 to 9, and y = digits.target holds the labels.

# load data
digits = load_digits()
X = digits.data
y = digits.target
# LabelBinarizer() binarizes the labels into one-hot vectors; it also works for more than two classes
# fit_transform() fits the binarizer and transforms the labels in one step
y = LabelBinarizer().fit_transform(y)
# train_test_split(data , target , test_size= [, random_state=]); test_size is the fraction held out for testing
X_train , X_test , y_train , y_test = train_test_split(X , y , test_size=.3)
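What `LabelBinarizer().fit_transform` does to the digit labels can be mimicked in a few lines of NumPy (a sketch of the idea, not sklearn's implementation): each integer label 0-9 becomes a 10-dimensional one-hot row.

```python
import numpy as np

def one_hot(labels, n_classes=10):
    """Turn integer labels into one-hot rows, like LabelBinarizer does."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1  # set one column per row
    return out

print(one_hot([0, 3, 9]))  # 3 rows, each with a single 1
```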

Define a function that adds a neural-network layer.

def add_layer(inputs , in_size , out_size , layer_name , activation_function=None):
    Weights = tf.Variable(tf.random.normal([in_size , out_size]))
    biases = tf.Variable(tf.zeros([1 , out_size])+0.1)
    Wx_plus_b = tf.matmul(inputs , Weights)+biases
    
    Wx_plus_b = tf.nn.dropout(Wx_plus_b , keep_prob)### the main dropout call
    
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    tf.summary.histogram(layer_name+'/outputs',outputs)# show outputs in TensorBoard's histograms; at least one histogram summary is required
    return outputs
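Apart from the summary call, `add_layer` is just an affine map, dropout, then the activation; a NumPy version of the same forward pass (hypothetical names, no TensorFlow) might look like:

```python
import numpy as np

def add_layer_np(inputs, W, b, keep_prob=1.0, activation=None,
                 rng=np.random.default_rng(0)):
    """NumPy forward pass mirroring add_layer: Wx + b -> dropout -> activation."""
    z = inputs @ W + b
    if keep_prob < 1.0:  # inverted dropout, as tf.nn.dropout applies it
        mask = rng.random(z.shape) < keep_prob
        z = z * mask / keep_prob
    return z if activation is None else activation(z)

x = np.random.default_rng(1).random((2, 64))
W = np.zeros((64, 50))            # zero weights just for the demo
b = np.full((1, 50), 0.1)         # matches the +0.1 bias init above
h = add_layer_np(x, W, b, keep_prob=1.0, activation=np.tanh)
print(h.shape)  # (2, 50)
```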

Define the placeholders for the network inputs.

# define placeholder for inputs to network
keep_prob = tf.placeholder(tf.float32)###### needed for dropout: controls the fraction of units kept

xs = tf.placeholder(tf.float32 , [None ,64])#8*8=64
ys = tf.placeholder(tf.float32 , [None ,10])
# add output layer
l1 = add_layer(xs , 64 , 50 , 'l1' , activation_function=tf.nn.tanh)# 100 hidden units were used originally, to make the overfitting visible
prediction = add_layer(l1 , 50 , 10 , 'l2' , activation_function=tf.nn.softmax)
WARNING: Logging before flag parsing goes to stderr.
W0914 22:00:45.831223 12724 deprecation.py:506] From <ipython-input-3-011b672dfbdb>:6: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

Define the cross_entropy loss.

# the loss between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys*tf.log(prediction) , reduction_indices=[1]))#loss
tf.summary.scalar('loss' , cross_entropy)# show loss/cross_entropy in TensorBoard's scalars
<tf.Tensor 'loss:0' shape=() dtype=string>
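The `reduce_mean`/`reduce_sum` expression above is the usual categorical cross-entropy; computed in NumPy with made-up inputs (a sketch, with a small epsilon to avoid log(0), which the bare TF expression does not have):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean over samples of -sum(y * log(p)), as in the TF expression."""
    return np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1))

y = np.array([[1.0, 0.0]])   # true class is index 0
p = np.array([[0.5, 0.5]])   # a maximally unsure prediction
print(cross_entropy(y, p))   # -log(0.5) ~ 0.6931
```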

Training step.

train_step = tf.train.GradientDescentOptimizer(0.6).minimize(cross_entropy)

Create the session.

sess = tf.Session()

Write the files TensorBoard needs.

# summary writer goes in here 
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter("logs/train" , sess.graph)
test_writer = tf.summary.FileWriter("logs/test" , sess.graph)

Initialization.

sess.run(tf.global_variables_initializer())

Training.

for i in range(500):
    sess.run(train_step , feed_dict={xs:X_train ,ys:y_train , keep_prob:0.5})### keep 50% of the units during training
    if i%50 == 0:
        #record loss
        train_result = sess.run(merged , feed_dict={xs:X_train , ys:y_train , keep_prob:1})### drop nothing when recording summaries
        test_result = sess.run(merged ,feed_dict={xs:X_test , ys:y_test , keep_prob:1})### drop nothing when recording summaries
        train_writer.add_summary(train_result , i)
        test_writer.add_summary(test_result ,i)

First run: a little overfitting; the loss on the test data is slightly higher. The hidden layer had 100 neurons at that point.
(figure not preserved)

Second run: keep all the data (keep_prob = 1).
(figure not preserved)

Third run: keep 50% of the data.
(figure not preserved)
