TensorFlow Learning Notes: Building a Forward-Propagation Neural Network

Forward-Propagation Neural Network

  • Variable initialization
  • Use a session (a with block) to initialize variables and run nodes of the computation graph
  • Variables created with tf.Variable must be initialized with tf.global_variables_initializer(); placeholders created with tf.placeholder() must be fed the required data via feed_dict
  • sess.run()
import tensorflow as tf

# Feed-forward network: x -> (w1) -> a -> (w2) -> y
x = tf.placeholder(tf.float32, shape=(1, 2))
# x = tf.placeholder(tf.float32, shape=(None, 2))  # allows feeding multiple samples at once
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print("y in tf_3_1 is:\n", sess.run(y, feed_dict={x: [[0.7, 0.5]]}))

Backpropagation Neural Network

  • Backpropagation: trains the model parameters by applying gradient descent to all parameters, so that the loss of the NN model on the training data is minimized
  • Loss function (loss): measures the gap between the predicted values and the known labels (mean squared error, MSE, is commonly used)
    loss_mse = tf.reduce_mean(tf.square(y_ - y))
  • Backpropagation training methods: with reducing loss as the optimization objective, options include gradient descent, the Momentum optimizer, the Adam optimizer, and others
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
train_step = tf.train.MomentumOptimizer(learning_rate, momentum).minimize(loss)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)


  • Learning rate: determines the magnitude of each parameter update

A learning rate that is too large causes oscillation and failure to converge; one that is too small makes convergence slow. A relatively small value such as 0.01 or 0.001 is usually chosen (the small sketch below illustrates the trade-off).
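A minimal sketch (plain Python, not from the course) of gradient descent on the one-dimensional loss f(w) = w^2, whose gradient is 2w, showing both failure modes:

# gradient descent on f(w) = w^2; the gradient is 2w
for lr in (1.1, 0.001):          # deliberately too large vs. very small
    w = 1.0
    for _ in range(50):
        w -= lr * 2 * w          # one update step
    print("lr = %.3f  ->  w after 50 steps: %g" % (lr, w))
# lr = 1.1 makes |w| grow on every step (oscillation, divergence);
# lr = 0.001 moves toward 0 but only reaches about 0.90 after 50 steps (slow convergence).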

The Standard Template for Building a Neural Network

Building a neural network consists of four steps: preparation, forward propagation, backpropagation, and the training loop (a complete, runnable example follows the skeleton below).
- Preparation: import modules and generate simulated data
- import
- define constants: BATCH_SIZE (number of samples randomly selected for each training step), seed (random seed), X, Y
- Forward propagation: define the inputs, parameters, and outputs
- x =     y_ =
- w1 =     w2 =
- a =     y =
- Backpropagation: define the loss function and the training method
- loss =
- train_step =
- Create a session and train for STEPS rounds

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 3000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})

The complete example:
import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
seed = 23455

# 32 samples with two features each (e.g. volume and weight of a part);
# the label is 1 when the two features sum to less than 1, otherwise 0
rng = np.random.RandomState(seed)
X = rng.rand(32, 2)
Y = [[int(X0 + X1 < 1)] for (X0, X1) in X]

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
# train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss)
# train_step = tf.train.AdamOptimizer(0.001).minimize(loss)  # three interchangeable optimizers

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print("\n")
    print("w1:", sess.run(w1))
    print("w2:", sess.run(w2))

    steps = 3000
    for i in range(steps):
        # pick the next BATCH_SIZE samples, cycling through the 32 training samples
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})

        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d training step(s), loss on all data is %g" % (i, total_loss))

    print("\n")
    print("w1:", sess.run(w1))
    print("w2:", sess.run(w2))

The script randomly generates the volume and weight of 32 manufactured parts, trains for 3000 rounds, and prints the loss on the full data set every 500 rounds.
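
To see what the trained network actually predicts, a line like the following can be appended inside the same with tf.Session() as sess: block, after the training loop (a minimal sketch; the input values are made up):

    # predict the label for one new sample, e.g. volume 0.7 and weight 0.5
    print("prediction for [0.7, 0.5]:", sess.run(y, feed_dict={x: [[0.7, 0.5]]}))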

Reference: the course 《人工智能实践:Tensorflow笔记》 (Artificial Intelligence Practice: TensorFlow Notes).
