Notes 2

Chapter 3

sess.run()

What sess.run() does depends on its arguments: 1) run the neural network to compute a result; 2) perform a training step.

1) Computing with the neural network

import tensorflow as tf

# Input placeholders: the three scores, fed in at run time
x1 = tf.placeholder(dtype=tf.float32)
x2 = tf.placeholder(dtype=tf.float32)
x3 = tf.placeholder(dtype=tf.float32)

# Trainable weights, all initialized to 0.1
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)

# Hidden nodes: each input scaled by its weight
n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3

# Output layer: the weighted sum
y = n1 + n2 + n3

sess = tf.Session()

# Variables must be initialized before the first run
init = tf.global_variables_initializer()
sess.run(init)
result = sess.run([x1, x2, x3, w1, w2, w3, y], feed_dict={x1: 90, x2: 80, x3: 70})
print(result)

The code above does no training; it only runs the neural network once, i.e., uses the network to perform a single computation.

Output of the code above:

D:\Program_Files_x64\Anaconda3\envs\deeplearning_Py_TF\python.exe D:/inst2vec_experiment/PycharmProjects/deeplearning_Py_TF/code_3.1_score1a.py
2021-01-13 13:11:35.307365: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
[array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.1, 0.1, 0.1, 24.0]

Process finished with exit code 0

The first line of output, 2021-01-13 13:11:35.307365: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2, is TensorFlow pointing out that we are not using the CPU to its full capacity; this message can safely be ignored.

The second line is the actual value of the result variable printed by print.
The whole result is wrapped in square brackets [], so it is a list, with the individual values separated by commas. The first three values are the three scores we fed in, 90, 80, and 70, corresponding to the variables x1, x2, and x3; TensorFlow treats each of them as a single-element array (array) of type float32 (32-bit floating point), which does not matter and does not affect the computation. Next come three 0.1 values, corresponding to the trainable parameters w1, w2, and w3, whose initial values were all defined as 0.1. Last comes the output-layer result y computed from these values, which is 24.0. We can check it by hand: 90*0.1 + 80*0.1 + 70*0.1 = 24, so the network we built computes correctly.
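The hand check above can also be reproduced in plain Python, without TensorFlow (note that with Python's 64-bit floats the sum is only approximately 24, because 0.1 is not exactly representable, so we round for printing):

```python
# Reproduce the forward pass by hand: y = x1*w1 + x2*w2 + x3*w3
x = [90, 80, 70]
w = [0.1, 0.1, 0.1]
y = sum(xi * wi for xi, wi in zip(x, w))
print(round(y, 6))  # 24.0
```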

2) Training

import tensorflow as tf

x1 = tf.placeholder(dtype=tf.float32)
x2 = tf.placeholder(dtype=tf.float32)
x3 = tf.placeholder(dtype=tf.float32)
yTrain = tf.placeholder(dtype=tf.float32)
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)

n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3

y = n1 + n2 + n3

# Loss: absolute error between the network output and the target
loss = tf.abs(y - yTrain)

# RMSProp optimizer with learning rate 0.001
optimizer = tf.train.RMSPropOptimizer(0.001)

# One training operation: adjust w1, w2, w3 to reduce the loss
train = optimizer.minimize(loss)

sess = tf.Session()

init = tf.global_variables_initializer()

sess.run(init)

result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
print(result)

result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
print(result)

The code above performs two training steps.

Compared with the earlier code, the final sess.run() calls differ in two ways. First, feed_dict now also supplies a value for yTrain. Second, the first argument of sess.run — the list of results we ask for — now includes the train object. Having train in the result list tells the program to execute the training step that train represents; in the course of that step, y, loss, and the rest are computed anyway, so even if train were the only entry in the result list, everything else would still be computed — we just would not see it. Here we also keep yTrain and loss in the result list for comparison. Note that only when a training object is in the result list can a sess.run call be called a training step; otherwise it merely runs the neural network once, i.e., uses the network to perform a single computation.
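One detail worth checking before looking at the output: with loss = |y - yTrain|, the gradient of the loss with respect to each weight is sign(y - yTrain) * x_i, so with y = 24 below the target 85, every weight gets pushed upward. A quick finite-difference sketch in plain Python (not part of the original code; the helper name is illustrative):

```python
x = (90.0, 80.0, 70.0)
yTrain = 85.0

def loss(w1, w2=0.1, w3=0.1):
    # Same model as above: y = x1*w1 + x2*w2 + x3*w3, absolute-error loss
    y = x[0] * w1 + x[1] * w2 + x[2] * w3
    return abs(y - yTrain)

# Central finite difference approximates d(loss)/d(w1) at w1 = 0.1
h = 1e-6
num_grad = (loss(0.1 + h) - loss(0.1 - h)) / (2 * h)
print(num_grad)  # ≈ -90.0, i.e. sign(y - yTrain) * x1, since y = 24 < 85
```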

The output of the code above:

D:\Program_Files_x64\Anaconda3\envs\deeplearning_Py_TF\python.exe D:/inst2vec_experiment/PycharmProjects/deeplearning_Py_TF/code_3.2_score1b.py
2021-01-13 13:20:07.623220: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.10316052, 0.10316006, 0.103159375, 24.0, array(85., dtype=float32), 61.0]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.10554425, 0.10563005, 0.1056722, 28.884804, array(96., dtype=float32), 67.1152]

Process finished with exit code 0
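The updated weights in the first result line (0.10316052, 0.10316006, 0.103159375) can be reproduced by hand. This is a sketch under two assumptions not stated in the note: the gradient of |y - yTrain| with respect to w_i is sign(y - yTrain) * x_i, and TF 1.x's RMSPropOptimizer uses decay 0.9 and initializes its mean-square accumulator to 1.0 (the latter is inferred here from the numbers, not taken from the text):

```python
import math

def rmsprop_step(w, grad, ms, lr=0.001, decay=0.9, eps=1e-10):
    # One RMSProp update: accumulate the squared gradient, then scale the step
    ms = decay * ms + (1 - decay) * grad * grad
    return w - lr * grad / (math.sqrt(ms) + eps), ms

# First training sample: x = (90, 80, 70) gives y = 24, target yTrain = 85,
# so the gradient w.r.t. w_i is sign(24 - 85) * x_i = -x_i
for x in (90.0, 80.0, 70.0):
    w_new, _ = rmsprop_step(0.1, -x, ms=1.0)
    print(w_new)  # ≈ 0.10316052, 0.10316006, 0.10315938
```

Note that the loss printed in the same result line (61.0) equals |24 - 85|: it is evaluated at the old weights, before the update is applied.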

Open questions

#1

When using training to solve a system of linear equations, I found that the numerical solution produced by my test code is still quite far from the analytical solution.

import tensorflow as tf

x1 = tf.placeholder(dtype=tf.float32)
x2 = tf.placeholder(dtype=tf.float32)
x3 = tf.placeholder(dtype=tf.float32)
yTrain = tf.placeholder(dtype=tf.float32)
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)

n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3

y = n1 + n2 + n3

loss = tf.abs(y - yTrain)

optimizer = tf.train.RMSPropOptimizer(0.001)

train = optimizer.minimize(loss)

sess = tf.Session()

init = tf.global_variables_initializer()

sess.run(init)

# Print the three results every 1000 iterations (i = 999, 1999, ..., 9999)
for i in range(10000):
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
    if (i + 1) % 1000 == 0:
        print("i = %d" % i)
        print(result)

    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
    if (i + 1) % 1000 == 0:
        print(result)

    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 70, x2: 90, x3: 80, yTrain: 77})
    if (i + 1) % 1000 == 0:
        print(result)

The output is as follows:

i = 999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5451521, 0.2391615, 0.22838268, 83.954834, array(85., dtype=float32), 1.045166]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5440352, 0.2380911, 0.22728945, 96.01454, array(96., dtype=float32), 0.014541626]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5432225, 0.23707846, 0.22628473, 77.69382, array(77., dtype=float32), 0.69381714]
i = 1999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5975501, 0.22908793, 0.18272829, 84.66895, array(85., dtype=float32), 0.33104706]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.59643316, 0.22801752, 0.18163507, 96.22063, array(96., dtype=float32), 0.22062683]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5972459, 0.22903016, 0.1826398, 76.802704, array(77., dtype=float32), 0.19729614]
i = 2999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5966129, 0.24870305, 0.16139817, 84.6607, array(85., dtype=float32), 0.33930206]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.59549594, 0.24763264, 0.16030495, 96.13649, array(96., dtype=float32), 0.13648987]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.59630865, 0.24864528, 0.16130967, 76.79605, array(77., dtype=float32), 0.20394897]
i = 3999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 4999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 5999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 6999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 7999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 8999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]
i = 9999
[None, array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.5956747, 0.2500022, 0.15944958, 85.000946, array(85., dtype=float32), 0.0009460449]
[None, array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.5967916, 0.25107262, 0.1605428, 95.99844, array(96., dtype=float32), 0.0015563965]
[None, array(70., dtype=float32), array(90., dtype=float32), array(80., dtype=float32), 0.5959789, 0.25005996, 0.15953808, 77.21537, array(77., dtype=float32), 0.21537018]

Process finished with exit code 0

The final numerical solution is [0.596, 0.250, 0.160],
while the analytical solution is [0.6, 0.3, 0.1].
The gap is still quite large, yet the numerical solution does seem to satisfy the equations; only the loss on the third input is noticeably larger than on the first two.
We also see that by i = 3999 the numerical solution already differs little from the one at i = 9999, so further increasing the number of iterations is unlikely to bring the weights much closer to the analytical solution. Why is that?
And how can this problem be solved?
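As a cross-check, the 3×3 system behind this experiment (coefficient rows = the three score vectors, right-hand sides = the yTrain values fed above) can be solved exactly in plain Python:

```python
def solve3(a, b):
    # Solve a 3x3 linear system a·w = b by Gauss-Jordan elimination
    # with partial pivoting (plain Python, no NumPy)
    n = 3
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(n):
        # Pivot: swap in the remaining row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        # Eliminate this column from every other row
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

a = [[90, 80, 70], [98, 95, 87], [70, 90, 80]]
b = [85, 96, 77]
print([round(w, 6) for w in solve3(a, b)])  # [0.6, 0.3, 0.1]
```

The coefficient matrix is nonsingular, so [0.6, 0.3, 0.1] is the unique solution. One plausible reading of the plateau after i = 3999 (an interpretation, not established by the experiment) is that the constant learning rate keeps RMSProp oscillating around a point that balances the three per-sample losses rather than converging to the exact root.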
