Neural Network 1: Introduction

The most important parameters of a neural network are the weights and the bias:

output = input * weights + bias
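As a concrete reading of this formula, the sketch below (an illustrative example with made-up shapes and values, not from the original post) computes one layer's output for a batch of 2 samples with 3 input features and 2 output neurons:

import tensorflow as tf

inputs = tf.constant([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])           # "input": shape (2, 3)
weights = tf.Variable(tf.random.normal([3, 2]))   # "weights": shape (3, 2)
bias = tf.Variable(tf.zeros([2]))                 # "bias": shape (2,)

output = tf.matmul(inputs, weights) + bias        # output = input * weights + bias
print(output.shape)                               # (2, 2): one 2-dim output per sample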

Loss function: measures the gap between the predicted value and the ground-truth answer.
Goal: find a set of parameters w and b that minimizes the loss function.
Gradient: the vector of partial derivatives of the loss function with respect to each parameter.
Gradient descent: move the parameters along the negative gradient to search for the minimum of the loss function and obtain the optimal parameters.
Learning rate: controls the step size of each gradient-descent update.
Backpropagation: working backward from the output layer, compute the partial derivatives of the loss with respect to each layer's parameters layer by layer, and iteratively update all parameters.

$w_{t+1} = w_t - lr \ast \frac{\partial loss}{\partial w_t}$
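To make the update rule concrete, here is a minimal hand check (a sketch assuming the same loss (w + 1)^2, lr = 0.2, and initial w = 5 used in the example below): the gradient is dloss/dw = 2(w + 1) = 12, so the first update moves w from 5 to 5 - 0.2 * 12 = 2.6, which matches epoch 0 of the output.

w = 5.0
lr = 0.2
grad = 2 * (w + 1)   # dloss/dw for loss = (w + 1)**2  ->  12.0
w = w - lr * grad    # w_{t+1} = w_t - lr * dloss/dw   ->  2.6
print(w)             # 2.6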

Here is an example:

import tensorflow as tf

w = tf.Variable(tf.constant(5, dtype=tf.float32))  # initial value of w is 5
lr = 0.2      # learning rate
epochs = 40   # number of training iterations

for epoch in range(epochs):
    with tf.GradientTape() as tape:      # record operations for automatic differentiation
        loss = tf.square(w + 1)          # loss = (w + 1)^2, minimized at w = -1
    grads = tape.gradient(loss, w)       # dloss/dw

    w.assign_sub(lr * grads)             # w = w - lr * dloss/dw
    print("epoch %s , w is %f , loss is %f  " % (epoch, w.numpy(), loss.numpy()))

The output:
epoch 0 , w is 2.600000 , loss is 36.000000  
epoch 1 , w is 1.160000 , loss is 12.959999  
epoch 2 , w is 0.296000 , loss is 4.665599  
epoch 3 , w is -0.222400 , loss is 1.679616  
epoch 4 , w is -0.533440 , loss is 0.604662  
epoch 5 , w is -0.720064 , loss is 0.217678  
epoch 6 , w is -0.832038 , loss is 0.078364  
epoch 7 , w is -0.899223 , loss is 0.028211  
epoch 8 , w is -0.939534 , loss is 0.010156  
epoch 9 , w is -0.963720 , loss is 0.003656  
epoch 10 , w is -0.978232 , loss is 0.001316  
epoch 11 , w is -0.986939 , loss is 0.000474  
epoch 12 , w is -0.992164 , loss is 0.000171  
epoch 13 , w is -0.995298 , loss is 0.000061  
epoch 14 , w is -0.997179 , loss is 0.000022  
epoch 15 , w is -0.998307 , loss is 0.000008  
epoch 16 , w is -0.998984 , loss is 0.000003  
epoch 17 , w is -0.999391 , loss is 0.000001  
epoch 18 , w is -0.999634 , loss is 0.000000  
epoch 19 , w is -0.999781 , loss is 0.000000  
epoch 20 , w is -0.999868 , loss is 0.000000  
epoch 21 , w is -0.999921 , loss is 0.000000  
epoch 22 , w is -0.999953 , loss is 0.000000  
epoch 23 , w is -0.999972 , loss is 0.000000  
epoch 24 , w is -0.999983 , loss is 0.000000  
epoch 25 , w is -0.999990 , loss is 0.000000  
epoch 26 , w is -0.999994 , loss is 0.000000  
epoch 27 , w is -0.999996 , loss is 0.000000  
epoch 28 , w is -0.999998 , loss is 0.000000  
epoch 29 , w is -0.999999 , loss is 0.000000  
epoch 30 , w is -0.999999 , loss is 0.000000  
epoch 31 , w is -1.000000 , loss is 0.000000  
epoch 32 , w is -1.000000 , loss is 0.000000  
epoch 33 , w is -1.000000 , loss is 0.000000  
epoch 34 , w is -1.000000 , loss is 0.000000  
epoch 35 , w is -1.000000 , loss is 0.000000  
epoch 36 , w is -1.000000 , loss is 0.000000  
epoch 37 , w is -1.000000 , loss is 0.000000  
epoch 38 , w is -1.000000 , loss is 0.000000  
epoch 39 , w is -1.000000 , loss is 0.000000  
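The example above tunes only a single scalar w. The goal stated earlier is to find both w and b, so the following sketch (an illustrative extension, not part of the original example; the data points, learning rate, and epoch count are assumptions) uses the same tf.GradientTape pattern to fit output = input * weights + bias to a few made-up points drawn from y = 2x + 1:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])   # made-up inputs (assumption)
y = tf.constant([3.0, 5.0, 7.0, 9.0])   # targets from y = 2x + 1 (assumption)

w = tf.Variable(0.0)   # weight
b = tf.Variable(0.0)   # bias
lr = 0.05              # assumed learning rate
epochs = 1000          # assumed number of iterations

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        output = w * x + b                            # output = input * weights + bias
        loss = tf.reduce_mean(tf.square(output - y))  # mean squared error
    dw, db = tape.gradient(loss, [w, b])              # partial derivatives w.r.t. w and b
    w.assign_sub(lr * dw)                             # w = w - lr * dloss/dw
    b.assign_sub(lr * db)                             # b = b - lr * dloss/db

print("w is %f , b is %f , loss is %f" % (w.numpy(), b.numpy(), loss.numpy()))

With enough iterations, w and b approach 2 and 1, the values used to generate the data.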