Activation Functions
Common activation functions include the linear function, the threshold (step) function, the sigmoid function, the hyperbolic tangent (tanh) function, and the Gaussian function.
The three activations most commonly used in TensorFlow are relu, sigmoid, and tanh; this post uses relu to fit a set of scattered points.
An activation function (activation function) is like a spatial magician: it warps and folds the feature space so that a linear boundary can be found within it.
Activation functions introduce non-linearity, letting the network solve problems that a purely linear model cannot.
Activation functions are applied in the same way in convolutional neural networks.
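As a quick illustration of how the three activations differ (a minimal NumPy sketch, not part of the original code), note how each one bends the same inputs: relu clips negatives to zero, sigmoid squashes into (0, 1), and tanh squashes into (-1, 1):

```python
import numpy as np

def relu(x):
    # max(0, x): passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

def sigmoid(x):
    # squashes the input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # squashes the input into (-1, 1); zero-centered
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))     # [0. 0. 2.]
print(sigmoid(x))  # approximately [0.12 0.5  0.88]
print(tanh(x))     # approximately [-0.96  0.    0.96]
```

Because relu is cheap to compute and its gradient does not saturate for positive inputs, it is a common default for hidden layers, which is why it is used in the fitting example below.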
Complete Code Implementation
# Activation functions
# Three common activations: relu, sigmoid, tanh
# In multi-layer networks, choose the activation carefully:
# a poor choice can lead to vanishing or exploding gradients
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, inSize, outSize, activationFunction=None):
    # One fully connected layer: inputs * weights + biases
    with tf.name_scope('Layer'):
        weights = tf.Variable(tf.random_normal([inSize, outSize]))  # weight matrix
        biases = tf.Variable(tf.zeros([1, outSize]) + 0.1)
        plusResult = tf.matmul(tf.cast(inputs, tf.float32), weights) + biases
    if activationFunction is None:
        outputs = plusResult
    else:
        outputs = activationFunction(plusResult)
    return outputs
# Training data: y = x^2 - 0.5 plus Gaussian noise
xData = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, xData.shape)
yData = np.square(xData) - 0.5 + noise
# Input layer
with tf.name_scope('Inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='xInput')
    ys = tf.placeholder(tf.float32, [None, 1], name='yInput')

# Hidden layer with relu activation, then a linear output layer
layer1 = add_layer(xs, 1, 10, activationFunction=tf.nn.relu)
prediction = add_layer(layer1, 10, 1, activationFunction=None)
loss = tf.square(ys - prediction)
add = tf.reduce_sum(loss, reduction_indices=[1])
averloss = tf.reduce_mean(add)
trainStep = tf.train.GradientDescentOptimizer(0.1).minimize(averloss)  # training step
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(xData, yData)
plt.ion()
plt.show()
for i in range(1000):
    sess.run(trainStep, feed_dict={xs: xData, ys: yData})
    if i % 50 == 0:
        # Remove the previous fitted curve before drawing the new one
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        predictionValue = sess.run(prediction, feed_dict={xs: xData})
        lines = ax.plot(xData, predictionValue, 'r-', lw=5)
        plt.pause(0.1)
Note: if you use PyCharm and the animation does not appear, go to File -> Settings -> Tools -> Python Scientific and uncheck the "Show plots in tool window" option.
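The comment in the code warns that, in deep networks, the choice of activation can cause vanishing or exploding gradients. A minimal NumPy sketch (an illustration, not part of the original code) shows why sigmoid is prone to vanishing gradients: its derivative never exceeds 0.25, and backpropagation multiplies one such factor per layer, so the gradient reaching early layers shrinks geometrically with depth:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # derivative of sigmoid: s * (1 - s), with maximum 0.25 at x = 0
    s = sigmoid(x)
    return s * (1.0 - s)

# Even in the best case (0.25 per layer), chaining 20 layers
# multiplies the gradient by 0.25 ** 20, which is nearly zero.
depth = 20
grad = sigmoid_grad(0.0) ** depth
print(grad)  # about 9.1e-13: the early layers barely learn
```

This is one reason relu (whose gradient is exactly 1 for positive inputs) is preferred for deep hidden layers, while sigmoid and tanh remain useful for outputs that must be bounded.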