TensorFlow Beginner Notes: Building a Neural Network (anyone who reads this will get it)


I. Neural Networks as I Understand Them

As we all know, our brains contain vast numbers of neurons, and neurons pass information to one another through synapses. Individual neurons are the building blocks of a neural network, and the information they pass along is the key.

If a neuron is a single brick, then many neurons together form a neural network.

II. Back to Reality: What Is the Neural Network We're Discussing?

1. Understanding Neural Networks:

First comes the input layer: the entry point for the dataset, the gateway for the training data. Data passes through the input layer into the next set of neurons, the hidden layer.
Relevant skills here include data cleaning and preprocessing with pandas, NumPy, and matplotlib.

The hidden layer is the data-processing factory, much like our neurons turning sodium and potassium ion flows into electrical signals for transmission.
A hidden layer does a lot of work. When data arrives, we multiply it by weights and add a bias, the familiar linear relationship (weights and biases get passed around like neurotransmitters between neurons). Then an activation function such as sigmoid or ReLU decides whether the neuron crosses its firing threshold. That is everything a single hidden neuron does; stacking many neurons turns this into a linear-algebra problem. (At this introductory stage, you can think of the network as essentially a linear processing pipeline plus activations.)
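To make the weight-bias-activation step concrete, here is a minimal NumPy sketch of what one hidden neuron computes; the input, weight, and bias values are made up for illustration:

```python
import numpy as np

def neuron(x, w, b):
    """One hidden neuron: weighted sum of inputs plus bias, then a sigmoid activation."""
    z = np.dot(w, x) + b                # the linear part: w * x + b
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid squashes z into (0, 1)

x = np.array([0.5, -1.2, 3.0])  # three made-up inputs
w = np.array([0.4, 0.1, -0.2])  # one weight per input
b = 0.1                         # the bias
print(neuron(x, w, b))
```

Swapping the sigmoid for `np.maximum(0, z)` would give you ReLU instead.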

This is where loss comes in: a measure of how good or bad the network is. When training a network, we compare the values the computer produces against the original data, and of course the smaller the difference, the better. This involves MSE (mean squared error) and optimizers such as stochastic gradient descent and Adam to tune the network. A better network = a smaller loss.
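As a quick sanity check of the idea, MSE can be computed in a few lines of NumPy (the sample values are made up):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average of the squared differences."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([0.30, 0.25, 0.40])  # made-up targets
y_pred = np.array([0.28, 0.30, 0.38])  # made-up predictions
print(mse(y_true, y_pred))  # smaller means a better fit
```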
Relevant topics: linear algebra, gradient descent, and the backpropagation (BP) algorithm.
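Gradient descent itself can be sketched in a few lines. This toy example fits y = w * x by repeatedly stepping the weight against the gradient of the MSE; all values are made up, and the true w is 2:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # generated with the true w = 2
w = 0.0                         # start from an arbitrary guess
lr = 0.1                        # learning rate

for _ in range(50):
    y_pred = w * x
    grad = np.mean(2 * (y_pred - y) * x)  # d(MSE)/dw
    w -= lr * grad                        # step downhill

print(w)  # converges toward 2
```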

The output layer then produces the prediction we want.

2. What Are Neural Networks Used For?
They solve two broad problems: 1) classification and 2) regression.
Classification can be used for recognition; regression can be used for prediction.

That is the shallow overview; going deeper takes continued study and refinement.

III. Building a Neural Network with TensorFlow 2.0

Let's start with some code:

# Linear regression with Keras on top of TensorFlow

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# Use numpy to generate 100 random points
x_data = np.random.rand(100)
noise = np.random.normal(0, 0.01, x_data.shape)  # noise with the same shape as x_data
y_data = x_data * 0.1 + 0.2 + noise

# Build a sequential model: layers are stacked one after another
model = tf.keras.Sequential()
# Add one fully connected (Dense) layer to the model
model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))  # units is the output dimension (y); input_shape is the input dimension (x); Dense is a linear layer
model.compile(optimizer='sgd', loss='mse')  # compile the model: sgd is stochastic gradient descent, mse is mean squared error

# Train the model
for step in range(3001):
    # train on one batch at a time
    cost = model.train_on_batch(x_data, y_data)  # value of the cost function, i.e. the loss
    # print the cost every 500 batches
    if step % 500 == 0:
        print('cost:', cost)

# Print the learned weight and bias
W, b = model.layers[0].get_weights()  # linear regression: only one layer
print('W:', W, 'b:', b)

# Feed x_data into the network to get the predictions y_pred
y_pred = model.predict(x_data)

# Plot the random points
plt.scatter(x_data, y_data)
# Plot the fitted line
plt.plot(x_data, y_pred, 'r-', lw=3)  # 'r-' is a red line, lw is the line width
plt.show()
plt.show()

-------------output----------------
cost: 0.07074891775846481
cost: 0.0001041813229676336
cost: 9.95131122181192e-05
cost: 9.833644435275346e-05
cost: 9.803989087231457e-05
cost: 9.796515223570168e-05
cost: 9.79462856776081e-05
W: [[0.0978568]] b: [0.2015335]

Process finished with exit code 0

This is a textbook linear regression (for the underlying theory, see my notes: Python Machine Learning: Linear Regression) built on TensorFlow's Keras. The output shows that over 3000 training steps the loss keeps dropping and the regression gets more accurate. After training, the computer has learned the weight and bias, the function f(x) = w * x + b is fully formed, and it can be used for prediction.
In this code the network has one input neuron and a single Dense layer, with 2 trainable parameters: the weight and the bias.

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 1)                 2         
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
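With the printed W and b, the fitted line can be evaluated by hand; plugging in x = 0.5 should land near the true value 0.1 * 0.5 + 0.2 = 0.25:

```python
# Values copied from the run above
w, b = 0.0978568, 0.2015335

def f(x):
    """The learned linear model f(x) = w * x + b."""
    return w * x + b

print(f(0.5))  # close to the true 0.25
```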

But if this code had to handle multiple inputs, i.e. more than one input neuron, it would be hard to adapt.

Now look at a second piece of code:

import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

file_read = pd.read_csv(r"D:\桌面\Advertising.csv")
#print(file_read)

"""
plt.scatter(file_read.TV ,file_read.sales)
plt.show()
"""

x = file_read.iloc[:, 1:-1]  # the three feature columns
y = file_read.iloc[:, -1]    # the target column (sales)

# Build the network layer by layer; more hidden units can drive the training loss lower
# activation is the activation function
model = tf.keras.Sequential([tf.keras.layers.Dense(20, input_shape=(3,), activation="relu"),
                             tf.keras.layers.Dense(1)])
# Show the layers we built
model.summary()

# Choose an optimizer
# sgd: stochastic gradient descent; takes longer than other methods and can get stuck at saddle points
# adam: adds momentum (an exponentially weighted average); speeds up where the gradient is steady and slows down where it changes
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=100)

# Test samples (taken from the training set)
x_test = [[230, 37, 62], [44, 39, 45], [18, 46, 70]]
print(model.predict(x_test))
"""
[[25.43827  ]
 [12.2664585]
 [ 7.06537  ]]
 with 10 hidden units
---------------
[[22.560543]
 [ 9.139952]
 [ 8.598131]]
 with 20 hidden units
"""

Output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 20)                80        
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 21        
=================================================================
Total params: 101
Trainable params: 101
Non-trainable params: 0
_________________________________________________________________
Epoch 1/100
7/7 [==============================] - 0s 712us/step - loss: 1666.4137
Epoch 2/100
7/7 [==============================] - 0s 570us/step - loss: 1295.3750
... (epochs 3-98 omitted; the loss falls steadily from ~984 to ~3.1) ...
Epoch 99/100
7/7 [==============================] - 0s 712us/step - loss: 3.0989
Epoch 100/100
7/7 [==============================] - 0s 712us/step - loss: 3.0634
[[21.974901]
 [ 9.544312]
 [ 8.109809]]

Process finished with exit code 0

First, the prediction error: I supplied three test samples of three values each, taken from the training set, and the results differ from the true values by <= 2.
Second, the loss keeps falling and then levels off.
In network terms, the dataset has three-part inputs, so we need three input neurons; the hidden layer has 20 neurons; and the output layer has one neuron. The summary shows 80 parameters in the hidden layer: each of the 20 neurons has one weight per input, so 20 * 3 = 60 of the parameters are weights, plus one bias per neuron, giving 20 * 3 + 20 = 80 parameters in total.

A simple 3 * 20 * 1 neural network is born.
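The parameter arithmetic generalizes: a Dense layer with n_in inputs and n_units units has n_in * n_units weights plus n_units biases. A tiny helper confirms the numbers in the summary table:

```python
def dense_params(n_in, n_units):
    """Parameter count of a fully connected layer: weights plus biases."""
    return n_in * n_units + n_units

print(dense_params(3, 20))  # hidden layer: 3*20 + 20 = 80
print(dense_params(20, 1))  # output layer: 20*1 + 1 = 21
```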

IV. Summary

When building a neural network, I think choosing a suitable number of hidden units and a suitable optimizer matters most.
After that, it comes down to understanding the various bits of math and getting comfortable with TensorFlow.

Feels great, heh!
This post wraps up my bumpy introduction; from here on I will study the math properly.
Here I come, neural-network-powered IoT.
