TensorFlow Notes 3.3

Disclaimer: this content is from the Peking University MOOC "人工智能实践:Tensorflow笔记" (AI Practice: TensorFlow Notes).
1. Concepts
Backpropagation: trains the model parameters by applying gradient descent to all of them, so that the NN model's loss function on the training data is minimized.

Loss function (loss): the gap between the predicted value (y) and the known answer (y_).

Mean squared error (MSE): MSE(y_, y) = (1/n) · Σᵢ₌₁ⁿ (yᵢ − y_ᵢ)²
Function implementation: loss = tf.reduce_mean(tf.square(y - y_))
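
As a quick sanity check of the formula, here is a minimal NumPy sketch (the sample values are made up) showing that averaging the squared differences matches the tf.reduce_mean(tf.square(...)) one-liner:

import numpy as np

y  = np.array([0.8, 0.3, 0.6])    # hypothetical predictions
y_ = np.array([1.0, 0.0, 1.0])    # hypothetical labels
mse = np.mean(np.square(y - y_))  # (1/n) * sum((y - y_)^2)
print(mse)                        # ≈ 0.0967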

Backpropagation training method: optimize with the goal of reducing the loss value.
Below are three optimization methods; pick any one of them. In each, 0.001 is the learning rate; in MomentumOptimizer, 0.9 is the momentum coefficient.
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
#train_step = tf.train.MomentumOptimizer(0.001,0.9).minimize(loss)
#train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

Learning rate: determines how far the parameters move on each update.
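
Concretely, plain gradient descent updates each parameter w as w ← w − learning_rate · ∂loss/∂w, so the learning rate directly scales every step. A minimal NumPy sketch with made-up numbers:

import numpy as np

learning_rate = 0.001
w = np.array([0.5, -0.3])       # hypothetical current parameters
grad = np.array([2.0, -1.0])    # hypothetical gradient dloss/dw
w = w - learning_rate * grad    # one gradient-descent step
print(w)                        # [ 0.498 -0.299]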

2. The Implementation Process of a Neural Network:
1. Prepare the dataset, extract features, and feed them to the neural network (Neural Network, NN) as input.

2. Build the NN structure from input to output (build the computational graph first, then execute it with a session; see the sketch after this list).
NN forward propagation algorithm → compute the output

3. Feed large amounts of feature data to the NN and iteratively optimize the NN parameters.
NN backpropagation algorithm → optimize the parameters and train the model

4. Use the trained model for prediction and classification.
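
A minimal sketch of step 2's "build the graph first, then execute it with a session" pattern (TF 1.x style; the constant values are made up):

import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
result = a + b                # only adds a node to the graph; nothing runs yet
with tf.Session() as sess:
    print(sess.run(result))   # [4. 6.] -- the session actually executes the graph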

3. Source Code

#coding:utf-8
#0 Import modules and generate simulated data
import tensorflow as tf
# NumPy is a scientific computing package
import numpy as np
# number of samples fed to the neural network at a time
BATCH_SIZE = 8
seed = 23455

# generate random numbers based on seed
rng = np.random.RandomState(seed)
# return a 32x2 matrix of random numbers, representing 32 (volume, weight) pairs, as the input dataset
X = rng.rand(32,2)
# for each row of the 32x2 matrix X, set the label to 1 if the two entries sum to less than 1, else 0
# these are the labels (correct answers) for the input dataset
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print("X :\n",X)
print("Y:\n",Y)

#1 Define the network inputs, parameters, and outputs, and define the forward-propagation process
x = tf.placeholder(tf.float32, shape=(None, 2))
y_= tf.placeholder(tf.float32, shape=(None, 1))

w1= tf.Variable(tf.random_normal([2,3], stddev=1, seed=1))
w2= tf.Variable(tf.random_normal([3,1], stddev=1, seed=1))

a = tf.matmul(x, w1)  # hidden layer: (None, 2) x (2, 3) -> (None, 3)
y = tf.matmul(a, w2)  # output layer: (None, 3) x (3, 1) -> (None, 1)

#2 Define the loss function and the backpropagation method
# mean squared error
loss = tf.reduce_mean(tf.square(y-y_))
# use gradient descent for training; below are three optimization methods, learning rate 0.001
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
#train_step = tf.train.MomentumOptimizer(0.001,0.9).minimize(loss)
#train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

#3 Create a session and train for STEPS rounds
with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	# print the current (untrained) parameter values
	print("w1:\n",sess.run(w1))
	print("w2:\n",sess.run(w2))
	print("\n")

	# train the model
	STEPS = 3000
	for i in range(STEPS):
		# cycle through the 32 samples, BATCH_SIZE rows at a time
		start = (i*BATCH_SIZE) % 32
		end = start + BATCH_SIZE
		sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
		if i % 500 == 0:
			total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
			print("After %d training step(s), loss on all data is %g" % (i,total_loss))

	# print the trained parameter values (after the loop finishes)
	print("\n")
	print("w1:\n", sess.run(w1))
	print("w2:\n", sess.run(w2))

4. Results

X :
 [[0.83494319 0.11482951]
 [0.66899751 0.46594987]
 [0.60181666 0.58838408]
 [0.31836656 0.20502072]
 [0.87043944 0.02679395]
 [0.41539811 0.43938369]
 [0.68635684 0.24833404]
 [0.97315228 0.68541849]
 [0.03081617 0.89479913]
 [0.24665715 0.28584862]
 [0.31375667 0.47718349]
 [0.56689254 0.77079148]
 [0.7321604  0.35828963]
 [0.15724842 0.94294584]
 [0.34933722 0.84634483]
 [0.50304053 0.81299619]
 [0.23869886 0.9895604 ]
 [0.4636501  0.32531094]
 [0.36510487 0.97365522]
 [0.73350238 0.83833013]
 [0.61810158 0.12580353]
 [0.59274817 0.18779828]
 [0.87150299 0.34679501]
 [0.25883219 0.50002932]
 [0.75690948 0.83429824]
 [0.29316649 0.05646578]
 [0.10409134 0.88235166]
 [0.06727785 0.57784761]
 [0.38492705 0.48384792]
 [0.69234428 0.19687348]
 [0.42783492 0.73416985]
 [0.09696069 0.04883936]]
Y:
 [[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
w1:
 [[-0.8113182   1.4845988   0.06532937]
 [-2.4427042   0.0992484   0.5912243 ]]
w2:
 [[-0.8113182 ]
 [ 1.4845988 ]
 [ 0.06532937]]


After 0 training step(s), loss on all data is 5.13118
After 500 training step(s), loss on all data is 0.429111
After 1000 training step(s), loss on all data is 0.409789
After 1500 training step(s), loss on all data is 0.399923
After 2000 training step(s), loss on all data is 0.394146
After 2500 training step(s), loss on all data is 0.390597
Final w1 and w2 after training:
w1:
 [[-0.7000663   0.9136318   0.08953571]
 [-2.3402493  -0.14641267  0.58823055]]
w2:
 [[-0.06024267]
 [ 0.91956186]
 [-0.0682071 ]]
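
Step 4 of the process above (using the trained model for prediction) is not shown in the source code. As a rough sketch, lines like the following could be added inside the same session, after the training loop (the sample input is hypothetical):

# hypothetical new sample: volume 0.5, weight 0.3; since 0.5 + 0.3 < 1,
# the output should move toward the correct label 1 as training progresses
sample = [[0.5, 0.3]]
pred = sess.run(y, feed_dict={x: sample})
print("prediction:", pred)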