01 Some thoughts on convolutional neural networks
- Fully connected layers have far too many parameters; the large parameter count makes computation expensive, and as layers are stacked the network quickly becomes intractable to train.
- Key concepts: convolution kernel, local connectivity, receptive field, weight sharing, stride, padding, number of kernels.
- Local connectivity and weight sharing drastically reduce the number of parameters.
- The pooling layer (downsampling) strengthens salient features and compresses the data.
- Convolution is a feature-extraction process.
- A CNN classifies more accurately than a purely fully-connected network (NN). Each parameter contributes its own estimation noise; as parameters grow, once the loss gets small enough to be comparable to the noise the parameters introduce, the prediction is swamped by that noise. A CNN's parameter count is small relative to the data, so its noise contribution is far smaller than an NN's, and its predictions are correspondingly more accurate.
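To make the parameter-savings point concrete, here is a small sketch comparing the parameter count of a fully connected layer with that of a convolutional layer on a 28x28 grayscale input. The layer sizes (500 hidden units, 32 filters of size 5x5) are illustrative assumptions, not taken from the notes above:

```python
# Illustrative sizes (assumptions): a 28x28 grayscale input,
# a fully connected layer with 500 hidden units,
# and a conv layer with 32 filters of size 5x5.
height, width, channels = 28, 28, 1
hidden_units = 500
k, num_filters = 5, 32

# Fully connected: every input pixel connects to every hidden unit.
fc_params = height * width * channels * hidden_units + hidden_units

# Convolution: the same 5x5x1 kernel is shared across all spatial
# positions, so the count is independent of the input's height/width.
conv_params = k * k * channels * num_filters + num_filters

print(fc_params)    # 392500
print(conv_params)  # 832
```

The fully connected layer needs 392,500 parameters versus 832 for the convolutional layer, which is the "local connectivity + weight sharing" saving in numbers.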
02 Convolution and pooling layer example
import tensorflow as tf
import numpy as np

# A 3x3 single-channel input "image".
M = np.array([
    [[1], [-1], [0]],
    [[-1], [2], [1]],
    [[0], [2], [-2]]
])
print("Matrix shape is: ", M.shape)

# One 2x2 filter (in_channels=1, out_channels=1) with fixed weights.
filter_weight = tf.get_variable(
    'weights', [2, 2, 1, 1],
    initializer=tf.constant_initializer([[1, -1],
                                         [0, 2]]))
biases = tf.get_variable('biases', [1],
                         initializer=tf.constant_initializer(1))

# Reshape to the NHWC layout tf.nn.conv2d expects:
# [batch, height, width, channels].
M = np.asarray(M, dtype='float32')
M = M.reshape(1, 3, 3, 1)

x = tf.placeholder('float32', [1, None, None, 1])
conv = tf.nn.conv2d(x, filter_weight, strides=[1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(conv, biases)
# Average pooling applied directly to the input: 2x2 window, stride 2.
pool = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    convoluted_M = sess.run(bias, feed_dict={x: M})
    pooled_M = sess.run(pool, feed_dict={x: M})
    print("convoluted_M: \n", convoluted_M)
    print("pooled_M: \n", pooled_M)
'''
convoluted_M:
[[[[ 7.][ 1.]]
[[-1.][-1.]]]]
pooled_M:
[[[[ 0.25] [ 0.5 ]]
[[ 1. ] [-2. ]]]]
'''
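The expected output above can be checked without TensorFlow. The following NumPy sketch redoes the 'SAME'-padded, stride-2 convolution (plus bias) and average pooling by hand on the same 3x3 input:

```python
import numpy as np

M = np.array([[1, -1, 0],
              [-1, 2, 1],
              [0, 2, -2]], dtype=np.float32)
W = np.array([[1, -1],
              [0, 2]], dtype=np.float32)
b = 1.0

# 'SAME' with stride 2 on a 3x3 input gives a ceil(3/2) = 2x2 output,
# so one row/column of zeros is padded on the bottom/right.
P = np.pad(M, ((0, 1), (0, 1)))
conv = np.zeros((2, 2), dtype=np.float32)
pool = np.zeros((2, 2), dtype=np.float32)
for i in range(2):
    for j in range(2):
        conv[i, j] = np.sum(P[2*i:2*i+2, 2*j:2*j+2] * W) + b
        # avg_pool with 'SAME' averages only the in-bounds elements of
        # each window; NumPy slice truncation past the edge mimics that.
        pool[i, j] = M[2*i:2*i+2, 2*j:2*j+2].mean()

print(conv)  # [[ 7.  1.] [-1. -1.]]
print(pool)  # [[ 0.25  0.5 ] [ 1.  -2.  ]]
```

This matches the TensorFlow results, and makes explicit that `avg_pool` with 'SAME' padding averages only over real input elements rather than the zero padding.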