TensorFlow Learning Notes [Neural Network Optimization]

This post covers building neural networks with TensorFlow: the neuron model, activation functions (such as ReLU and sigmoid), controlling network complexity, loss functions (MSE and cross entropy), learning-rate scheduling (exponential decay), and exponential moving averages. Examples show how these optimization strategies help prevent overfitting.

TensorFlow: Neural Network Optimization

1. Neuron model: expressed mathematically as
f(Σi xi·wi + b), where xi are the inputs, wi the weights, b the bias, and f the activation function.
A neural network is built from neurons as its basic units.

2. Activation functions: introducing a nonlinear activation function increases the expressive power of the model. Common activation functions are relu, sigmoid, and tanh:
relu(x) = max(x, 0), sigmoid(x) = 1 / (1 + e^(-x)), tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
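
As a quick, self-contained sketch of the neuron model and these activations (assuming TensorFlow 1.x; the input values are made up for illustration), a single neuron's weighted sum can be passed through relu, sigmoid, and tanh as follows:

import tensorflow as tf
import numpy as np

# One neuron: weighted sum of the inputs plus a bias, followed by an activation
x = tf.placeholder(tf.float32, shape=(None, 2))
w = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
b = tf.Variable(tf.zeros([1]))
z = tf.matmul(x, w) + b            # pre-activation value

y_relu = tf.nn.relu(z)             # max(z, 0)
y_sigmoid = tf.nn.sigmoid(z)       # 1 / (1 + e^(-z))
y_tanh = tf.nn.tanh(z)             # (e^z - e^(-z)) / (e^z + e^(-z))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([y_relu, y_sigmoid, y_tanh], feed_dict={x: np.array([[0.7, 0.5]])}))
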
3. Complexity of a neural network: measured by the number of layers and the number of trainable parameters.
4. Number of layers: layers = number of hidden layers + 1 output layer.
5. Trainable parameters: all the weights w plus all the biases b in the network. For example, a network with 2 inputs, a 3-node hidden layer and 1 output node has 2×3 + 3 + 3×1 + 1 = 13 parameters.

6. Loss function: measures the gap between predicted values and true values. During training, the network's parameters are adjusted continually so that the loss keeps decreasing, which yields a model with higher accuracy.

Example code:

import tensorflow as tf
import numpy as np
batch_size = 8
seed = 23455

# Generate 32 samples with 2 features each; the label is x1 + x2 plus a small random noise
rdm = np.random.RandomState(seed)
X = rdm.rand(32, 2)
Y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in X]

# Define the network's inputs, parameters and output, and the forward-propagation graph
x = tf.placeholder(tf.float32, shape = (None,2))
y_  = tf.placeholder(tf.float32, shape = (None, 1))
w1 = tf.Variable(tf.random_normal([2,1],stddev = 1, seed = 1))
y = tf.matmul(x,w1)

# Define the loss function and the back-propagation method:
# MSE loss, minimized with gradient descent
loss_mse = tf.reduce_mean(tf.square(y_ - y))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)

# Create a session and run the training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # Train the model
    STEPS = 3000
    for i in range(STEPS):
        start = (i * batch_size) % 32
        end = (i * batch_size) % 32 + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
            print("After %d training_step(s),loss  on all data is %g" % (i, total_loss))
            print(sess.run(w1))

    print("final w1 is ",sess.run(w1))

Results:

After 0 training_step(s),loss on all data is 0.655701
[[-0.80974597]
[ 1.4852903 ]]
After 500 training_step(s),loss on all data is 0.35731
[[-0.46074435]
[ 1.641878 ]]
After 1000 training_step(s),loss on all data is 0.232481
[[-0.21939856]
[ 1.6984766 ]]
After 1500 training_step(s),loss on all data is 0.170404
[[-0.04415595]
[ 1.7003176 ]]
After 2000 training_step(s),loss on all data is 0.133037
[[0.08942621]
[1.673328 ]]
After 2500 training_step(s),loss on all data is 0.106939
[[0.19583555]
[1.6322677 ]]
final w1 is [[0.28331503]
[1.5852976 ]]

Process finished with exit code 0

7. Cross entropy: measures the distance between two probability distributions; for a label distribution y_ and a predicted distribution y it is H(y_, y) = -Σ y_ · log(y). The larger the cross entropy, the farther apart the two distributions are and the greater their difference; conversely, the smaller it is, the closer and more similar the two distributions are.

Expressed with TensorFlow functions:

ce = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-12, 1.0)))
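
To see the "distance" behaviour concretely, here is a minimal sketch with made-up numbers: two predictions are compared against the same one-hot label, and the prediction closer to the label produces the smaller cross entropy.

import tensorflow as tf

# Hypothetical example: a one-hot label and two candidate predictions
y_true = tf.constant([[1.0, 0.0]])    # ground-truth distribution
y1 = tf.constant([[0.6, 0.4]])        # prediction farther from the label
y2 = tf.constant([[0.8, 0.2]])        # prediction closer to the label

# Cross entropy: the smaller the value, the closer the two distributions
ce1 = -tf.reduce_mean(y_true * tf.log(tf.clip_by_value(y1, 1e-12, 1.0)))
ce2 = -tf.reduce_mean(y_true * tf.log(tf.clip_by_value(y2, 1e-12, 1.0)))

with tf.Session() as sess:
    print(sess.run([ce1, ce2]))       # ce2 < ce1, so y2 is the better prediction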

8. Softmax: maps the n outputs of an n-class classifier to values that satisfy the requirements of a probability distribution:
softmax(yi) = e^(yi) / Σj e^(yj)
Applying softmax: with n classes the network produces n outputs, where yi indicates how likely class i is; passing these n outputs through softmax yields a classification result that forms a proper probability distribution.

In TensorFlow, the model's outputs are usually passed through softmax to obtain a probability distribution over the classes; this distribution is then compared with the ground-truth labels by computing the cross entropy, which becomes the loss. This is implemented as:

ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
cem = tf.reduce_mean(ce)
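
The following is a minimal, standalone sketch of this pattern (the logits and labels are made up): the raw network outputs go in directly, because the op applies softmax internally before computing the cross entropy.

import tensorflow as tf

# Hypothetical raw outputs (logits) for 3 classes, and one-hot labels
y = tf.constant([[2.0, 0.5, 0.1],
                 [0.2, 3.0, 0.3]])
y_ = tf.constant([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

# softmax is applied inside the op; labels are class indices, hence tf.argmax
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
cem = tf.reduce_mean(ce)

with tf.Session() as sess:
    print(sess.run(ce))    # per-example cross entropy
    print(sess.run(cem))   # mean cross entropy, used as the loss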

9. Learning rate: the step size of each parameter update. If the learning rate is too large, the parameters oscillate around the minimum instead of converging; if it is too small, convergence is slow. During training, parameters are updated in the direction that decreases the loss along its gradient.

The update rule for each parameter is: w_new = w - learning_rate * ∂loss/∂w
Example code:

import tensorflow as tf

# Parameter to optimize, initial value 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Loss function (w + 1)^2, minimized at w = -1
loss = tf.square(w + 1)
# Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Create a session and run the training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps :w is %f, loss is %f"%(i,w_val,loss_val))

Results:

After 0 steps :w is 2.600000, loss is 12.959999
After 1 steps :w is 1.160000, loss is 4.665599
After 2 steps :w is 0.296000, loss is 1.679616
After 3 steps :w is -0.222400, loss is 0.604662
After 4 steps :w is -0.533440, loss is 0.217678
After 5 steps :w is -0.720064, loss is 0.078364
After 6 steps :w is -0.832038, loss is 0.028211
After 7 steps :w is -0.899223, loss is 0.010156
After 8 steps :w is -0.939534, loss is 0.003656
After 9 steps :w is -0.963720, loss is 0.001316
After 10 steps :w is -0.978232, loss is 0.000474
After 11 steps :w is -0.986939, loss is 0.000171
After 12 steps :w is -0.992164, loss is 0.000061
After 13 steps :w is -0.995298, loss is 0.000022
After 14 steps :w is -0.997179, loss is 0.000008
After 15 steps :w is -0.998307, loss is 0.000003
After 16 steps :w is -0.998984, loss is 0.000001
After 17 steps :w is -0.999391, loss is 0.000000
After 18 steps :w is -0.999634, loss is 0.000000
After 19 steps :w is -0.999781, loss is 0.000000
After 20 steps :w is -0.999868, loss is 0.000000
After 21 steps :w is -0.999921, loss is 0.000000
After 22 steps :w is -0.999953, loss is 0.000000
After 23 steps :w is -0.999972, loss is 0.000000
After 24 steps :w is -0.999983, loss is 0.000000
After 25 steps :w is -0.999990, loss is 0.000000
After 26 steps :w is -0.999994, loss is 0.000000
After 27 steps :w is -0.999996, loss is 0.000000
After 28 steps :w is -0.999998, loss is 0.000000
After 29 steps :w is -0.999999, loss is 0.000000
After 30 steps :w is -0.999999, loss is 0.000000
After 31 steps :w is -1.000000, loss is 0.000000
After 32 steps :w is -1.000000, loss is 0.000000
After 33 steps :w is -1.000000, loss is 0.000000
After 34 steps :w is -1.000000, loss is 0.000000
After 35 steps :w is -1.000000, loss is 0.000000
After 36 steps :w is -1.000000, loss is 0.000000
After 37 steps :w is -1.000000, loss is 0.000000
After 38 steps :w is -1.000000, loss is 0.000000
After 39 steps :w is -1.000000, loss is 0.000000

Process finished with exit code 0

From the results we can see that as the loss keeps decreasing, w converges to its optimal value of -1.

10. Exponentially decaying learning rate: the learning rate changes continually as the number of training steps grows.

The learning rate is computed as:
learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ^ (global_step / LEARNING_RATE_STEP)
Expressed with TensorFlow functions:

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True/False)
where LEARNING_RATE_BASE is the initial learning rate,
     LEARNING_RATE_DECAY is the decay rate,
     global_step counts the training steps so far and is a non-trainable variable,
     LEARNING_RATE_STEP is how many steps pass between learning-rate updates, usually set to total number of samples / BATCH_SIZE,
     and staircase=True truncates global_step / LEARNING_RATE_STEP to an integer so the learning rate decays in a staircase pattern, while staircase=False gives a smooth decay curve.
For example, with LEARNING_RATE_BASE = 0.1, LEARNING_RATE_DECAY = 0.99 and LEARNING_RATE_STEP = 1, the learning rate after one training step is 0.1 × 0.99 = 0.099, as the output below shows.

Example code:

import tensorflow as tf

LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 1

global_step = tf.Variable(0,trainable=False)
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE,global_step,LEARNING_RATE_STEP,LEARNING_RATE_DECAY,staircase=True)
# Parameter to optimize, initial value 5
w = tf.Variable(tf.constant(5,dtype=tf.float32))
# Define the loss function
loss = tf.square(w+1)
# Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

# Create a session and run the training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps: global_step is %f, w is %f, learning_rate is %f, loss is %f"%(i,global_step_val,w_val,learning_rate_val,loss_val))

Results:

After 0 steps: global_step is 1.000000, w is 3.800000, learning_rate is 0.099000, loss is 23.040001
After 1 steps: global_step is 2.000000, w is 2.849600, learning_rate is 0.098010, loss is 14.819419
After 2 steps: global_step is 3.000000, w is 2.095001, learning_rate is 0.097030, loss is 9.579033
After 3 steps: global_step is 4.000000, w is 1.494386, learning_rate is 0.096060, loss is 6.221961
After 4 steps: global_step is 5.000000, w is 1.015167, learning_rate is 0.095099, loss is 4.060896
After 5 steps: global_step is 6.000000, w is 0.631886, learning_rate is 0.094148, loss is 2.663051
After 6 steps: global_step is 7.000000, w is 0.324608, learning_rate is 0.093207, loss is 1.754587
After 7 steps: global_step is 8.000000, w is 0.077684, learning_rate is 0.092274, loss is 1.161403
After 8 steps: global_step is 9.000000, w is -0.121202, learning_rate is 0.091352, loss is 0.772287
After 9 steps: global_step is 10.000000, w is -0.281761, learning_rate is 0.090438, loss is 0.515867
After 10 steps: global_step is 11.000000, w is -0.411674, learning_rate is 0.089534, loss is 0.346128
After 11 steps: global_step is 12.000000, w is -0.517024, learning_rate is 0.088638, loss is 0.233266
After 12 steps: global_step is 13.000000, w is -0.602644, learning_rate is 0.087752, loss is 0.157891
After 13 steps: global_step is 14.000000, w is -0.672382, learning_rate is 0.086875, loss is 0.107334
After 14 steps: global_step is 15.000000, w is -0.729305, learning_rate is 0.086006, loss is 0.073276
After 15 steps: global_step is 16.000000, w is -0.775868, learning_rate is 0.085146, loss is 0.050235
After 16 steps: global_step is 17.000000, w is -0.814036, learning_rate is 0.084294, loss is 0.034583
After 17 steps: global_step is 18.000000, w is -0.845387, learning_rate is 0.083451, loss is 0.023905
After 18 steps: global_step is 19.000000, w is -0.871193, learning_rate is 0.082617, loss is 0.016591
After 19 steps: global_step is 20.000000, w is -0.892476, learning_rate is 0.081791, loss is 0.011561
After 20 steps: global_step is 21.000000, w is -0.910065, learning_rate is 0.080973, loss is 0.008088
After 21 steps: global_step is 22.000000, w is -0.924629, learning_rate is 0.080163, loss is 0.005681
After 22 steps: global_step is 23.000000, w is -0.936713, learning_rate is 0.079361, loss is 0.004005
After 23 steps: global_step is 24.000000, w is -0.946758, learning_rate is 0.078568, loss is 0.002835
After 24 steps: global_step is 25.000000, w is -0.955125, learning_rate is 0.077782, loss is 0.002014
After 25 steps: global_step is 26.000000, w is -0.962106, learning_rate is 0.077004, loss is 0.001436
After 26 steps: global_step is 27.000000, w is -0.967942, learning_rate is 0.076234, loss is 0.001028
After 27 steps: global_step is 28.000000, w is -0.972830, learning_rate is 0.075472, loss is 0.000738
After 28 steps: global_step is 29.000000, w is -0.976931, learning_rate is 0.074717, loss is 0.000532
After 29 steps: global_step is 30.000000, w is -0.980378, learning_rate is 0.073970, loss is 0.000385
After 30 steps: global_step is 31.000000, w is -0.983281, learning_rate is 0.073230, loss is 0.000280
After 31 steps: global_step is 32.000000, w is -0.985730, learning_rate is 0.072498, loss is 0.000204
After 32 steps: global_step is 33.000000, w is -0.987799, learning_rate is 0.071773, loss is 0.000149
After 33 steps: global_step is 34.000000, w is -0.989550, learning_rate is 0.071055, loss is 0.000109
After 34 steps: global_step is 35.000000, w is -0.991035, learning_rate is 0.070345, loss is 0.000080
After 35 steps: global_step is 36.000000, w is -0.992297, learning_rate is 0.069641, loss is 0.000059
After 36 steps: global_step is 37.000000, w is -0.993369, learning_rate is 0.068945, loss is 0.000044
After 37 steps: global_step is 38.000000, w is -0.994284, learning_rate is 0.068255, loss is 0.000033
After 38 steps: global_step is 39.000000, w is -0.995064, learning_rate is 0.067573, loss is 0.000024
After 39 steps: global_step is 40.000000, w is -0.995731, learning_rate is 0.066897, loss is 0.000018

Process finished with exit code 0

11. Moving average (exponential moving average): tracks an average of every w and b in the model over a recent window of training; using moving averages improves the model's ability to generalize.

The moving average (the "shadow" value) is updated as:

shadow = decay * shadow + (1 - decay) * parameter

where the decay rate used at each update is: decay = min(MOVING_AVERAGE_DECAY, (1 + global_step) / (10 + global_step))
Expressed with TensorFlow functions:

ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)

where MOVING_AVERAGE_DECAY is the moving-average decay rate, usually set to a value close to 1, and global_step is the current number of training steps.

For example, with MOVING_AVERAGE_DECAY = 0.99, global_step = 0, w1 = 1 and the shadow initialized to 0, the decay actually used is min(0.99, (1 + 0) / (10 + 0)) = 0.1, so the updated shadow is 0.1 × 0 + 0.9 × 1 = 0.9, which matches the second line of the output below.

The code:

import tensorflow as tf

# Parameter w1 (initial value 0) and global_step (step counter, not trainable)
w1 = tf.Variable(0, dtype=tf.float32)
global_step = tf.Variable(0, trainable=False)
# Moving-average decay rate, close to 1; global_step feeds the min(...) decay formula above
MOVING_AVERAGE_DECAY = 0.99
ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)

# ema.apply() creates a shadow (moving-average) copy of every trainable variable; running ema_op updates the shadows
ema_op = ema.apply(tf.trainable_variables())

# Watch how w1 and its moving average change over successive updates
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)

    print(sess.run([w1,ema.average(w1)]))

    sess.run(tf.assign(w1,1))
    sess.run(ema_op)
    print (sess.run([w1,ema.average(w1)]))
    # Update global_step and w1 to simulate that after 100 training steps w1 has become 10
    sess.run(tf.assign(global_step,100))
    sess.run(tf.assign(w1,10))
    sess.run(ema_op)
    print(sess.run([w1,ema.average(w1)]))

    sess.run(ema_op)
    print(sess.run([w1, ema.average(w1)]))

    sess.run(ema_op)
    print(sess.run([w1, ema.average(w1)]))

    sess.run(ema_op)
    print(sess.run([w1, ema.average(w1)]))

    sess.run(ema_op)
    print(sess.run([w1, ema.average(w1)]))
    print(sess.run([w1,ema.average(w1)]))

    sess.run(ema_op)
    print(sess.run([w1, ema.average(w1)]))

Results:

[0.0, 0.0]
[1.0, 0.9]
[10.0, 1.6445453]
[10.0, 2.3281732]
[10.0, 2.955868]
[10.0, 3.532206]
[10.0, 4.061389]
[10.0, 4.061389]
[10.0, 4.547275]

From the output we can see that initially both w1 and its moving average are 0; after the first update the moving average is 0.9. When global_step is set to 100 and w1 is updated to 10, the moving average becomes 1.644; each subsequent run of ema_op moves the moving average closer to w1. This shows that the moving average tracks the parameter as it changes.

12. Overfitting: the model achieves high accuracy on the training set but noticeably lower accuracy on the test set, i.e. it generalizes poorly.

13. Regularization: adds a penalty on the weights to the loss function, introducing a measure of model complexity; this suppresses noise in the training data and reduces overfitting.

loss = loss(y, y_) + REGULARIZER * loss(w), where loss(w) is the L1 penalty Σ|wi| or the L2 penalty Σ wi².
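
One common TensorFlow 1.x pattern for adding an L2 penalty to the loss is sketched below (the REGULARIZER value and the tiny network are made up for illustration): each weight's penalty is put into a 'losses' collection and then summed into the total loss. Swapping tf.contrib.layers.l2_regularizer for tf.contrib.layers.l1_regularizer gives the L1 penalty instead.

import tensorflow as tf

REGULARIZER = 0.01   # hypothetical weight of the regularization term

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

# Put the L2 penalty of this weight into the 'losses' collection
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(REGULARIZER)(w1))

# Total loss = data loss (MSE) + sum of all collected regularization terms
loss_mse = tf.reduce_mean(tf.square(y_ - y))
loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_total)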
