The standard template
Four steps to build your own neural network
1. Prepare the dataset: extract features and feed them to the network as input. Here I simply generate the data with random numbers.
2. Build the network structure from input to output: first define the forward computation, y = tf.matmul(x, w) (a matrix multiplication), then execute it in a session with sess.run().
   This step can also include regularizing the parameters w to reduce overfitting, and exponential decay of the learning rate. Neither is written here yet; see the sketch after this list.
3. Feed large amounts of feature data to the network and iteratively optimize its parameters, using the idea of backpropagation (BP).
4. Use the trained model for detection and classification, then keep improving it.
Since this is a first attempt, the code is kept simple: no functions, everything written straight through.
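The regularization and learning-rate decay mentioned in step 2 do not appear in the listing below, so here is a minimal standalone sketch of both, assuming the TF 1.x API; the names REGULARIZER, LEARNING_RATE_BASE, LEARNING_RATE_DECAY and LEARNING_RATE_STEP are made up for illustration:

import tensorflow as tf

# L2-regularize a weight matrix: collect the penalty, then fold it into the loss
REGULARIZER = 0.01   # assumed regularization strength
w = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(REGULARIZER)(w))
# total_loss = mse_loss + tf.add_n(tf.get_collection('losses'))

# exponentially decay the learning rate as global_step grows
LEARNING_RATE_BASE = 0.1     # assumed initial learning rate
LEARNING_RATE_DECAY = 0.99   # assumed decay factor
LEARNING_RATE_STEP = 1       # decay after this many global steps
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE, global_step,
    LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)
# pass global_step so minimize() increments it on every training step:
# train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
#     loss, global_step=global_step)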
# tf.nn.relu()
# tf.nn.sigmoid()
# tf.nn.tanh()   # a few common activation functions
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455
rng = np.random.RandomState(SEED)   # seeded RNG for reproducibility
X = rng.rand(32, 2)
# labels: y = x1 + x2 plus uniform noise in [-0.05, 0.05)
Y = [[x1 + x2 + (rng.rand() / 10.0 - 0.05)] for (x1, x2) in X]
# the data to train on
# a two-layer network with a single hidden layer
# first define x and y_ as placeholders
x=tf.placeholder(tf.float32,shape=(None,2))
y_=tf.placeholder(tf.float32,shape=(None,1))
# initialize the parameters randomly
w1=tf.Variable(tf.random_normal([2,3],stddev=1,seed=1))
w2=tf.Variable(tf.random_normal([3,1],stddev=1,seed=1))
a=tf.matmul(x,w1)
y=tf.matmul(a,w2)
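# (sketch, not used in the run below) an activation between the layers would look like:
#   a = tf.nn.relu(tf.matmul(x, w1))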
loss = tf.reduce_mean(tf.square(y - y_))   # mean squared error
learning_rate = 0.001   # assumed value; not specified in the original notes
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
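# other built-in optimizers that could be swapped in here:
# train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)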
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")
    # use the session to train the network 3000 times
    step = 3000
    for i in range(step):
        # cycle through the 32 samples, BATCH_SIZE at a time
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        # every 500 steps, print the total loss and the parameters to watch training progress
        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d training step(s),loss on all data is %g" % (i, total_loss))
            print("w1:\n", sess.run(w1))
            print("w2:\n", sess.run(w2))
Run results:
w1:
[[-0.8113182 1.4845988 0.06532937]
[-2.4427042 0.0992484 0.5912243 ]]
w2:
[[-0.8113182 ]
[ 1.4845988 ]
[ 0.06532937]]
After 0 training step(s),loss on all data is 0.114777
w1:
[[-0.6720916 1.2298336 0.0541185 ]
[-2.3713698 -0.03128349 0.5854803 ]]
w2:
[[-0.45731926]
[ 1.2211072 ]
[ 0.00213568]]
After 500 training step(s),loss on all data is 0.0992414
w1:
[[-0.5121955 1.1483153 0.02219009]
[-2.2126336 0.01886313 0.5419505 ]]
w2:
[[-0.58823925]
[ 1.0622164 ]
[ 0.04864678]]
After 1000 training step(s),loss on all data is 0.0991065
w1:
[[-4.3420959e-01 1.1919889e+00 -9.1391290e-04]
[-2.0610752e+00 4.7483924e-03 5.0598681e-01]]
w2:
[[-0.63517994]
[ 1.0441375 ]
[ 0.06181234]]
After 1500 training step(s),loss on all data is 0.0990022
w1:
[[-0.37395903 1.229884 -0.01913854]
[-1.9383953 -0.00320749 0.47656265]]
w2:
[[-0.6764951 ]
[ 1.030762 ]
[ 0.07317095]]
After 2000 training step(s),loss on all data is 0.0989239
w1:
[[-0.32592675 1.2633038 -0.03395712]
[-1.8365345 -0.00687484 0.45186615]]
w2:
[[-0.71336144]
[ 1.0203415 ]
[ 0.08316984]]
After 2500 training step(s),loss on all data is 0.0988651
w1:
[[-0.28680435 1.2930856 -0.046258 ]
[-1.7505496 -0.00744359 0.43079028]]
w2:
[[-0.7464448 ]
[ 1.0119288 ]
[ 0.09205804]]