Building Neural Networks with TensorFlow

I followed Morvan (莫烦)'s TensorFlow neural-network Python course on Bilibili. It explains things in a simple, accessible way and is very beginner-friendly.

Let's get a feel for TensorFlow through a small example.

import tensorflow as tf
import numpy as np

# create data: the target relation is y = 0.1*x + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data*0.1 + 0.3

### create tensorflow structure start ###
Weights = tf.Variable(tf.random_uniform([1], -0.1, 0.1))
biases = tf.Variable(tf.zeros([1]))

y = Weights*x_data + biases

loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# initialize_all_variables() is deprecated; use global_variables_initializer()
init = tf.global_variables_initializer()
### create tensorflow structure end ###

sess = tf.Session()
sess.run(init)

for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        # Weights should converge towards 0.1 and biases towards 0.3
        print(step, sess.run(Weights), sess.run(biases))

numpy.random.randn(d0, d1, …, dn) returns one or more samples drawn from the standard normal distribution.
numpy.random.rand(d0, d1, …, dn) returns random samples in [0, 1).
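
A quick sketch of the difference (the sample values vary on every run):

import numpy as np

u = np.random.rand(3)    # uniform samples in [0, 1), e.g. [0.42 0.91 0.07]
n = np.random.randn(3)   # standard normal samples, e.g. [-1.3  0.2  0.8]
print(u.shape, n.shape)  # (3,) (3,)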

Most of the data TensorFlow handles is float32.

Session control

import tensorflow as tf

matrix1 = tf.constant([[3,3]])     # 1x2 matrix
matrix2 = tf.constant([[2],[2]])   # 2x1 matrix
product = tf.matmul(matrix1, matrix2)  # matrix multiplication, result [[12]]

### method 1: open and close the session manually
sess = tf.Session()
result = sess.run(product)
print(result)
sess.close()

### method 2: a with-block closes the session automatically
with tf.Session() as sess:
    result2 = sess.run(product)
    print(result2)

Variables

import tensorflow as tf

state = tf.Variable(0, name='counter')
one = tf.constant(1)

new_value = tf.add(state, one)
update = tf.assign(state, new_value)  # write new_value back into state

init = tf.global_variables_initializer()  # variables must be initialized

with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))  # prints 1, 2, 3

Building a neural network

placeholder: feeding in values

A placeholder is only given its value at execution time; it just reserves a spot in the graph, whereas a Variable is specified from the start.

The benefit of placeholder: you can set up the computation logic in advance without knowing the values yet.
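
A minimal sketch of how values are fed in through feed_dict at run time:

import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    # the actual values are supplied only when the graph is run
    print(sess.run(output, feed_dict={input1: [7.], input2: [2.]}))  # [14.]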

activation function

It must be differentiable, so that the error can be propagated through it during backpropagation.

When the network has only two or three layers, the activation function can be chosen fairly freely; as the number of layers grows, the activation function must be chosen with care.

Choosing an activation function
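
As a small illustration (my own sketch, not from the course), here is how a few common tf.nn activations transform the same inputs:

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

with tf.Session() as sess:
    print(sess.run(tf.nn.relu(x)))     # [0.  0.  0.  0.5 2. ]
    print(sess.run(tf.nn.sigmoid(x)))  # squashed into (0, 1)
    print(sess.run(tf.nn.tanh(x)))     # squashed into (-1, 1)
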
Adding a layer: def add_layer()

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)  # biases start slightly positive
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

tf.random_normal() draws the requested number of values from a normal distribution with the given parameters.
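
For example (a quick sketch; mean and stddev default to 0.0 and 1.0):

import tensorflow as tf

w = tf.random_normal([2, 3], mean=0.0, stddev=1.0)  # 2x3 tensor of normal samples
with tf.Session() as sess:
    print(sess.run(w))  # a fresh draw on every run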

Constructing the neural network

import tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]  # 300x1 column vector
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise  # y = x^2 - 0.5 plus noise

# [None, 1]: any number of rows, one column -- think of it as a column vector
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# hidden layer: 1 input -> 10 units with ReLU; output layer: 10 -> 1, linear
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

# sum the squared error per sample, then average over the batch
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                    reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

numpy.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)
start: the starting value
stop: the end value
num: the number of elements, 50 by default

np.newaxis
https://blog.csdn.net/molu_chase/article/details/78619731
This article is helpful for understanding it.
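
A quick sketch of what linspace plus np.newaxis produces:

import numpy as np

a = np.linspace(-1, 1, 5)                 # shape (5,):   [-1. -0.5 0. 0.5 1.]
b = np.linspace(-1, 1, 5)[:, np.newaxis]  # shape (5, 1): the same values as a column
print(a.shape, b.shape)                   # (5,) (5, 1)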

Visualizing the results

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# [None, 1]: any number of rows, one column -- a column vector
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                    reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)  # the noisy training data
plt.ion()                   # interactive mode, so show() does not block
plt.show()

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            ax.lines.remove(lines[0])  # erase the previously drawn fit
        except Exception:
            pass                       # nothing to remove on the first pass
        # prediction only depends on xs, so ys need not be fed here
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(1)

Some additional neural network topics

Optimizers

  • A first look at Optimizers

The video is vivid and intuitive, highly recommended
https://www.bilibili.com/video/av16001891/?p=18

  • The different kinds of Optimizers

Hard to put into words; once again, the video is excellent
https://www.bilibili.com/video/av16001891/?p=19
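
As a small sketch (my own addition, not from the course): in TF 1.x, swapping optimizers is a one-line change, for example replacing gradient descent with Adam:

import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w - 2.0)  # minimized at w = 2

# plain gradient descent, as used in the examples above:
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# Adam, an adaptive alternative:
train_step = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step)
    print(sess.run(w))  # close to 2.0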

Overfitting

  • Overfitting explained

https://www.bilibili.com/video/av16001891/?p=23

  • Using dropout against overfitting

https://www.bilibili.com/video/av16001891/?p=24
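
A minimal sketch of dropout in TF 1.x (my own addition): tf.nn.dropout randomly zeroes out activations and scales the survivors by 1/keep_prob, so feeding keep_prob < 1 during training and 1.0 during evaluation toggles it on and off:

import tensorflow as tf

keep_prob = tf.placeholder(tf.float32)  # probability of keeping each unit
x = tf.ones([1, 10])
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    # training: drop roughly half of the activations
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))
    # evaluation: keep everything (dropout disabled)
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))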
