Getting Started with TensorFlow (谭秉峰) (5): Training a Nonlinear Regression Model with a Single-Hidden-Layer Neural Network

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

#Use numpy to generate 200 evenly spaced sample points in [-0.5, 0.5]
x=np.linspace(-0.5,0.5,200)

'''
np.newaxis inserts a new axis of length 1 at the position where it appears. This is a bit abstract, so the example below helps:
x1 = np.array([1, 2, 3, 4, 5])
# the shape of x1 is (5,)
x1_new = x1[:, np.newaxis]
# now, the shape of x1_new is (5, 1)
# array([[1],
#        [2],
#        [3],
#        [4],
#        [5]])
x1_new = x1[np.newaxis,:]
# now, the shape of x1_new is (1, 5)
# array([[1, 2, 3, 4, 5]])
'''
x_data=x[:, np.newaxis]
noise=np.random.normal(0,0.02,x_data.shape)
y_data=np.square(x_data)+noise
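#Optional sanity check: x_data and y_data should both be column vectors of shape (200, 1),
#matching the [None, 1] placeholders defined below
assert x_data.shape==(200,1) and y_data.shape==(200,1)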

#Define two placeholders for the inputs and the target values
x=tf.placeholder(tf.float32,[None,1])
y=tf.placeholder(tf.float32,[None,1])

#Define the hidden layer; the weight shape [1,10] means the layer has 10 neurons
Weights_L1=tf.Variable(tf.random_normal([1,10]))
biases_L1=tf.Variable(tf.zeros([1,10]))
Wx_plus_b_L1=tf.matmul(x,Weights_L1)+biases_L1
L1=tf.nn.tanh(Wx_plus_b_L1)

#Define the output layer
Weights_L2=tf.Variable(tf.random_normal([10,1]))
#The output layer has a single neuron, so the second dimension is 1
biases_L2=tf.Variable(tf.zeros([1,1]))
Wx_plus_b_L2=tf.matmul(L1,Weights_L2)+biases_L2
prediction=tf.nn.tanh(Wx_plus_b_L2)
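#Shape flow through the network for a batch of N input points:
#  x:                                   [N, 1]
#  tf.matmul(x,Weights_L1)+biases_L1:   [N, 1] x [1, 10] -> [N, 10]  (the bias broadcasts over the batch)
#  tf.matmul(L1,Weights_L2)+biases_L2:  [N, 10] x [10, 1] -> [N, 1]
#tanh at the output limits predictions to (-1, 1), which is fine here because
#y = x^2 + noise stays well inside that range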

#Quadratic cost function (mean squared error); use the y placeholder so the loss
#compares predictions with the values supplied through feed_dict
loss=tf.reduce_mean(tf.square(prediction-y))
#Define a gradient descent optimizer for training
optimizer=tf.train.GradientDescentOptimizer(0.1)
#Minimize the cost function
train_step=optimizer.minimize(loss)
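#For reference: the loss above is the mean squared error over the N fed samples,
#  loss = (1/N) * sum_i (prediction_i - y_i)^2,
#and each run of train_step applies one gradient descent update
#  theta <- theta - 0.1 * d(loss)/d(theta)
#to every trainable variable (the two weight matrices and the two bias vectors)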


with tf.Session() as sess:
    #Initialize the variables
    sess.run(tf.global_variables_initializer())
    #Print the network's weight values before training
    print("W1 before training:",sess.run(Weights_L1))
    print("W2 before training:",sess.run(Weights_L2))

    for step in range(2001):
        #Run one training step
        #(sess.run fetches train_step; feed_dict feeds x_data and y_data into the placeholders)
        sess.run(train_step,feed_dict={x:x_data,y:y_data})
        #Print the loss every 10 steps
        if step%10==0:
            total_loss=sess.run(loss,feed_dict={x:x_data,y:y_data})
            print('After %d training steps,loss on all data is %f' % (step,total_loss))
    #Get the network's predictions for the training inputs
    prediction_value=sess.run(prediction,feed_dict={x:x_data})
    #Plot the results
    plt.figure()
    #Scatter plot of the sample points (x_data, y_data)
    plt.scatter(x_data,y_data)
    #Plot the fitted curve
    '''
    'r-' draws a solid red line
    '''
    plt.plot(x_data,prediction_value,'r-',lw=5)
    plt.show()
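As a side note, the trained network can also be queried on inputs it was never trained on. The following is a minimal sketch, assuming it is placed inside the same with tf.Session() block right after the training loop; new_x is a hypothetical array chosen for illustration:

    #Hypothetical new inputs with shape (3, 1), inside the range used for training
    new_x=np.array([[-0.3],[0.0],[0.4]])
    new_pred=sess.run(prediction,feed_dict={x:new_x})
    #Since the network approximates y = x^2, these should come out close to 0.09, 0.0 and 0.16
    print(new_pred)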

Output:

W1 before training: [[-0.08572705  0.6966633  -1.1612353  -0.7290459   0.29184964 -0.85558146
   1.2591429   0.461367   -0.8520372  -0.3425133 ]]
W2 before training: [[ 0.67222   ]
 [-0.24824722]
 [ 1.634524  ]
 [ 0.72611713]
 [ 0.8183997 ]
 [ 0.6919925 ]
 [-1.2388937 ]
 [ 0.48964742]
 [-0.032876  ]
 [-1.4493022 ]]
After 0 training steps,loss on all data is 0.475454
After 10 training steps,loss on all data is 0.173107
After 20 training steps,loss on all data is 0.012129
After 30 training steps,loss on all data is 0.006160
After 40 training steps,loss on all data is 0.005995
After 50 training steps,loss on all data is 0.005926
After 60 training steps,loss on all data is 0.005860
After 70 training steps,loss on all data is 0.005795
After 80 training steps,loss on all data is 0.005730
After 90 training steps,loss on all data is 0.005666
After 100 training steps,loss on all data is 0.005603
After 110 training steps,loss on all data is 0.005540
After 120 training steps,loss on all data is 0.005478
After 130 training steps,loss on all data is 0.005417
After 140 training steps,loss on all data is 0.005356
After 150 training steps,loss on all data is 0.005295
After 160 training steps,loss on all data is 0.005235
After 170 training steps,loss on all data is 0.005176
After 180 training steps,loss on all data is 0.005117
After 190 training steps,loss on all data is 0.005058
After 200 training steps,loss on all data is 0.005000
After 210 training steps,loss on all data is 0.004942
After 220 training steps,loss on all data is 0.004884
After 230 training steps,loss on all data is 0.004827
After 240 training steps,loss on all data is 0.004770
After 250 training steps,loss on all data is 0.004714
After 260 training steps,loss on all data is 0.004657
After 270 training steps,loss on all data is 0.004602
After 280 training steps,loss on all data is 0.004546
After 290 training steps,loss on all data is 0.004491
After 300 training steps,loss on all data is 0.004436
After 310 training steps,loss on all data is 0.004381
After 320 training steps,loss on all data is 0.004326
After 330 training steps,loss on all data is 0.004272
After 340 training steps,loss on all data is 0.004218
After 350 training steps,loss on all data is 0.004165
After 360 training steps,loss on all data is 0.004111
After 370 training steps,loss on all data is 0.004058
After 380 training steps,loss on all data is 0.004005
After 390 training steps,loss on all data is 0.003952
After 400 training steps,loss on all data is 0.003900
After 410 training steps,loss on all data is 0.003847
After 420 training steps,loss on all data is 0.003795
After 430 training steps,loss on all data is 0.003744
After 440 training steps,loss on all data is 0.003692
After 450 training steps,loss on all data is 0.003641
After 460 training steps,loss on all data is 0.003590
After 470 training steps,loss on all data is 0.003539
After 480 training steps,loss on all data is 0.003488
After 490 training steps,loss on all data is 0.003438
After 500 training steps,loss on all data is 0.003388
After 510 training steps,loss on all data is 0.003338
After 520 training steps,loss on all data is 0.003289
After 530 training steps,loss on all data is 0.003239
After 540 training steps,loss on all data is 0.003190
After 550 training steps,loss on all data is 0.003142
After 560 training steps,loss on all data is 0.003093
After 570 training steps,loss on all data is 0.003045
After 580 training steps,loss on all data is 0.002997
After 590 training steps,loss on all data is 0.002950
After 600 training steps,loss on all data is 0.002903
After 610 training steps,loss on all data is 0.002856
After 620 training steps,loss on all data is 0.002809
After 630 training steps,loss on all data is 0.002763
After 640 training steps,loss on all data is 0.002717
After 650 training steps,loss on all data is 0.002672
After 660 training steps,loss on all data is 0.002627
After 670 training steps,loss on all data is 0.002582
After 680 training steps,loss on all data is 0.002538
After 690 training steps,loss on all data is 0.002494
After 700 training steps,loss on all data is 0.002451
After 710 training steps,loss on all data is 0.002407
After 720 training steps,loss on all data is 0.002365
After 730 training steps,loss on all data is 0.002323
After 740 training steps,loss on all data is 0.002281
After 750 training steps,loss on all data is 0.002239
After 760 training steps,loss on all data is 0.002198
After 770 training steps,loss on all data is 0.002158
After 780 training steps,loss on all data is 0.002118
After 790 training steps,loss on all data is 0.002079
After 800 training steps,loss on all data is 0.002040
After 810 training steps,loss on all data is 0.002001
After 820 training steps,loss on all data is 0.001963
After 830 training steps,loss on all data is 0.001926
After 840 training steps,loss on all data is 0.001889
After 850 training steps,loss on all data is 0.001852
After 860 training steps,loss on all data is 0.001816
After 870 training steps,loss on all data is 0.001781
After 880 training steps,loss on all data is 0.001746
After 890 training steps,loss on all data is 0.001712
After 900 training steps,loss on all data is 0.001678
After 910 training steps,loss on all data is 0.001645
After 920 training steps,loss on all data is 0.001612
After 930 training steps,loss on all data is 0.001580
After 940 training steps,loss on all data is 0.001548
After 950 training steps,loss on all data is 0.001517
After 960 training steps,loss on all data is 0.001487
After 970 training steps,loss on all data is 0.001457
After 980 training steps,loss on all data is 0.001427
After 990 training steps,loss on all data is 0.001398
After 1000 training steps,loss on all data is 0.001370
After 1010 training steps,loss on all data is 0.001342
After 1020 training steps,loss on all data is 0.001315
After 1030 training steps,loss on all data is 0.001289
After 1040 training steps,loss on all data is 0.001263
After 1050 training steps,loss on all data is 0.001237
After 1060 training steps,loss on all data is 0.001212
After 1070 training steps,loss on all data is 0.001188
After 1080 training steps,loss on all data is 0.001164
After 1090 training steps,loss on all data is 0.001141
After 1100 training steps,loss on all data is 0.001118
After 1110 training steps,loss on all data is 0.001096
After 1120 training steps,loss on all data is 0.001074
After 1130 training steps,loss on all data is 0.001053
After 1140 training steps,loss on all data is 0.001033
After 1150 training steps,loss on all data is 0.001012
After 1160 training steps,loss on all data is 0.000993
After 1170 training steps,loss on all data is 0.000974
After 1180 training steps,loss on all data is 0.000955
After 1190 training steps,loss on all data is 0.000937
After 1200 training steps,loss on all data is 0.000919
After 1210 training steps,loss on all data is 0.000902
After 1220 training steps,loss on all data is 0.000885
After 1230 training steps,loss on all data is 0.000869
After 1240 training steps,loss on all data is 0.000853
After 1250 training steps,loss on all data is 0.000838
After 1260 training steps,loss on all data is 0.000823
After 1270 training steps,loss on all data is 0.000809
After 1280 training steps,loss on all data is 0.000794
After 1290 training steps,loss on all data is 0.000781
After 1300 training steps,loss on all data is 0.000768
After 1310 training steps,loss on all data is 0.000755
After 1320 training steps,loss on all data is 0.000742
After 1330 training steps,loss on all data is 0.000730
After 1340 training steps,loss on all data is 0.000718
After 1350 training steps,loss on all data is 0.000707
After 1360 training steps,loss on all data is 0.000696
After 1370 training steps,loss on all data is 0.000685
After 1380 training steps,loss on all data is 0.000675
After 1390 training steps,loss on all data is 0.000665
After 1400 training steps,loss on all data is 0.000655
After 1410 training steps,loss on all data is 0.000646
After 1420 training steps,loss on all data is 0.000637
After 1430 training steps,loss on all data is 0.000628
After 1440 training steps,loss on all data is 0.000620
After 1450 training steps,loss on all data is 0.000612
After 1460 training steps,loss on all data is 0.000604
After 1470 training steps,loss on all data is 0.000596
After 1480 training steps,loss on all data is 0.000589
After 1490 training steps,loss on all data is 0.000582
After 1500 training steps,loss on all data is 0.000575
After 1510 training steps,loss on all data is 0.000568
After 1520 training steps,loss on all data is 0.000562
After 1530 training steps,loss on all data is 0.000556
After 1540 training steps,loss on all data is 0.000550
After 1550 training steps,loss on all data is 0.000544
After 1560 training steps,loss on all data is 0.000539
After 1570 training steps,loss on all data is 0.000533
After 1580 training steps,loss on all data is 0.000528
After 1590 training steps,loss on all data is 0.000523
After 1600 training steps,loss on all data is 0.000519
After 1610 training steps,loss on all data is 0.000514
After 1620 training steps,loss on all data is 0.000510
After 1630 training steps,loss on all data is 0.000505
After 1640 training steps,loss on all data is 0.000501
After 1650 training steps,loss on all data is 0.000497
After 1660 training steps,loss on all data is 0.000494
After 1670 training steps,loss on all data is 0.000490
After 1680 training steps,loss on all data is 0.000486
After 1690 training steps,loss on all data is 0.000483
After 1700 training steps,loss on all data is 0.000480
After 1710 training steps,loss on all data is 0.000477
After 1720 training steps,loss on all data is 0.000474
After 1730 training steps,loss on all data is 0.000471
After 1740 training steps,loss on all data is 0.000468
After 1750 training steps,loss on all data is 0.000466
After 1760 training steps,loss on all data is 0.000463
After 1770 training steps,loss on all data is 0.000461
After 1780 training steps,loss on all data is 0.000458
After 1790 training steps,loss on all data is 0.000456
After 1800 training steps,loss on all data is 0.000454
After 1810 training steps,loss on all data is 0.000452
After 1820 training steps,loss on all data is 0.000450
After 1830 training steps,loss on all data is 0.000448
After 1840 training steps,loss on all data is 0.000446
After 1850 training steps,loss on all data is 0.000444
After 1860 training steps,loss on all data is 0.000442
After 1870 training steps,loss on all data is 0.000441
After 1880 training steps,loss on all data is 0.000439
After 1890 training steps,loss on all data is 0.000438
After 1900 training steps,loss on all data is 0.000436
After 1910 training steps,loss on all data is 0.000435
After 1920 training steps,loss on all data is 0.000434
After 1930 training steps,loss on all data is 0.000432
After 1940 training steps,loss on all data is 0.000431
After 1950 training steps,loss on all data is 0.000430
After 1960 training steps,loss on all data is 0.000429
After 1970 training steps,loss on all data is 0.000428
After 1980 training steps,loss on all data is 0.000427
After 1990 training steps,loss on all data is 0.000426
After 2000 training steps,loss on all data is 0.000425