[Notes][tf]|[Example 2] TensorFlow Learning Notes (2)


Reference: Morvan Python (morvanzhou.github.io)

Previous posts in this series

[Notes]|[tf]|[Tensor]|[Example 1] TensorFlow Learning Notes (1)
[Notes]|[tf]|[tensorboard]|[Example 2 Visualization] TensorFlow Learning Notes (3)
[Notes]|[tf]|[mnist] TensorFlow Learning Notes (4)
[Notes]|[tf]|[CNN]|[Example 3] TensorFlow Learning Notes (5)
[Notes]|[tf]|[Saver]|[Saving and Restoring Models] TensorFlow Learning Notes (6)

tf.random_normal()

tf.random_normal(shape,mean=0.0,stddev=1.0,dtype=tf.float32,seed=None,name=None)
  • Outputs random values drawn from a normal distribution.
  • mean: a tensor or Python value of type dtype; the mean of the normal distribution.
  • stddev: a tensor or Python value of type dtype; the standard deviation of the normal distribution.
  • seed: a Python integer used as the random seed (see the sketch below).
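
A minimal sketch of how these parameters behave, assuming a TF 1.x graph-mode session (the shapes and printed ranges are illustrative):

import tensorflow as tf

# a 2x3 tensor drawn from N(0, 1); the seed makes the draw reproducible
a = tf.random_normal([2, 3], mean=0.0, stddev=1.0, seed=42)
# same shape, drawn from N(5, 0.1): values cluster tightly around 5
b = tf.random_normal([2, 3], mean=5.0, stddev=0.1, seed=42)

with tf.Session() as sess:
    print(sess.run(a))   # most values fall within mean ± 3*stddev, i.e. [-3, 3]
    print(sess.run(b))   # most values fall within [4.7, 5.3]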

reduction_indices

  • Functions such as tf.reduce_mean and tf.reduce_sum take a reduction_indices parameter (the older name for axis) that specifies which dimension the reduction runs over.
  • Note that much sample code omits this parameter; it then defaults to None, which reduces input_tensor across all dimensions down to 0-D, i.e. a single scalar (compressing in both the row and column directions).
  • reduction_indices = 1 compresses along the column direction: each row is reduced to one value, leaving a vector with one entry per row.
  • reduction_indices = 0 compresses along the row direction: each column is reduced to one value, leaving a vector with one entry per column.
  • The sketch below shows all three cases.
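
A small sketch of the three cases (TF 1.x assumed; reduction_indices is the deprecated alias of axis):

import tensorflow as tf

x = tf.constant([[1., 1., 1.],
                 [2., 2., 2.]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x)))                         # 9.0 (scalar, all dims reduced)
    print(sess.run(tf.reduce_sum(x, reduction_indices=[0])))  # [3. 3. 3.] (collapse rows)
    print(sess.run(tf.reduce_sum(x, reduction_indices=[1])))  # [3. 6.]    (collapse columns)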


Example 2

"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
from __future__ import print_function
import tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):
    # activation_function=None means a linear (identity) activation by default
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # randomly initialized matrix with in_size rows and out_size columns
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    # a row vector giving the biases their initial value (0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
        # keep the linear relation (Wx_plus_b unchanged)
    else:
        outputs = activation_function(Wx_plus_b)
        # e.g. relu(Wx_plus_b)
    return outputs

# Make up some real data
# x_data is the input, y_data the corresponding target output
x_data = np.linspace(-1,1,300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
# add noise so the data looks more like real (non-smooth) measurements
y_data = np.square(x_data) - 0.5 + noise

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
# any number of rows, 1 column: effectively a column vector
ys = tf.placeholder(tf.float32, [None, 1])
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# 1 neuron in the input layer; 10 in the hidden layer; 1 in the output layer
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                     reduction_indices=[1]))
# per-example sum of squared errors (reduce_sum compresses each row to one value), then reduce_mean averages that vector into a scalar
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# gradient descent with learning rate 0.1 to shrink the loss

# important step
# tf.initialize_all_variables() is no longer valid
# since 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

for i in range(1000):
    # iterate 1000 times
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # to see the step improvement
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data})) 
        
        
'''
output >>>
0.46071675
0.03524734
0.009706424
0.0076882155
0.007399298
0.0071580904
0.0068638553
0.0064982134
0.006088353
0.0057029123
0.005357201
0.0050192624
0.004711616
0.004444493
0.0041974466
0.003994227
0.00381248
0.0036691495
0.0035579773
0.0034664767
'''
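
The script above targets TF 1.x (tf.placeholder and tf.Session were removed from the default namespace in TF 2.x). A minimal workaround sketch, assuming TensorFlow 2.x is what you have installed, is to run the same code through the compat.v1 shim:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # restores graph mode, tf.placeholder and tf.Session

# the example above then runs unchanged: tf.random_normal, tf.placeholder,
# tf.Session and tf.train.GradientDescentOptimizer are all still available
# under the compat.v1 namespace, and the version check takes the else branch,
# picking tf.global_variables_initializer(), which compat.v1 also provides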

Visualization of Example 2

"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
from __future__ import print_function
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

##plt.scatter(x_data, y_data)
##plt.show()

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# important step
sess = tf.Session()
# tf.initialize_all_variables() is no longer valid since
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

# plot the real data
fig = plt.figure(num = "display", figsize=(8,6))
# create a figure named "display", 8 inches wide and 6 inches tall
ax = fig.add_subplot(111)
# split the canvas into a 1x1 grid and draw in its first (only) cell
ax.scatter(x_data, y_data)
# scatter plot of x_data against y_data
plt.ion()
# interactive mode: plt.show() no longer blocks, so the script keeps running
plt.show()
# draw the figure once, then continue


for i in range(1000):
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # to visualize the result and improvement
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        # plot the prediction
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(1)
        # pause for 1 s between updates so each redraw is visible
plt.pause(0)
# block here so the figure stays open after training finishes
  • output >>> an animated figure: the scatter of the training data, with the red prediction curve redrawn every 50 training steps
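
The live-updating plot relies on matplotlib's interactive mode. Below is a stripped-down sketch of the same ion/pause pattern with no TensorFlow involved; the sine curve is just a hypothetical stand-in for the prediction:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 100)
fig, ax = plt.subplots()
plt.ion()    # interactive mode: plt.show() no longer blocks
plt.show()

line = None
for step in range(10):
    if line is not None:
        line.remove()              # drop the previous curve before redrawing
    line, = ax.plot(x, np.sin(x * (step + 1)), 'r-', lw=5)
    plt.pause(0.3)                 # redraw and wait 0.3 s

plt.ioff()
plt.show()                         # keep the final figure open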

PyCharm: fixing the red line not being displayed in the figure above

  • File -> Settings -> Tools -> Python Scientific -> uncheck the "Show plots in tool window" option