Definition of the convolution operation
Reference: https://www.cnblogs.com/lhuser/p/8414759.html
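Since the definition itself is only linked above, here is a minimal NumPy sketch of the discrete definition (f*g)[n] = sum_m f[m]*g[n-m], checked against np.convolve; the arrays f and g are arbitrary toy values, not taken from the reference.

import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# Direct implementation of the definition: out[n] = sum_m f[m] * g[n - m]
out = np.zeros(len(f) + len(g) - 1)
for n in range(len(out)):
    for m in range(len(f)):
        if 0 <= n - m < len(g):
            out[n] += f[m] * g[n - m]

print(out)                # [0.  1.  2.5 4.  1.5]
print(np.convolve(f, g))  # same result ("full" convolution)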
Motivation for the convolution operation
Reference: https://www.jianshu.com/p/e3824e8fd115
1-D and 2-D convolution operations
Reference: https://www.cnblogs.com/dasein/p/5692153.html
Practice code:
# coding=utf-8
# Author: Shanv
# function: 1-D and 2-D convolution practice with tf.nn.conv1d / tf.nn.conv2d
import datetime

import numpy as np
import tensorflow as tf


def get_conv1d(x_batch, weight, conv1_stride):
    # 1-D convolution: x_batch is (batch, width, in_channels),
    # weight is (filter_width, in_channels, out_channels)
    conv = tf.nn.conv1d(x_batch, weight, stride=conv1_stride, padding='SAME')
    return conv


def get_conv2d(x_batch, weight, conv2_stride):
    # 2-D convolution: x_batch is (batch, height, width, in_channels),
    # weight is (filter_height, filter_width, in_channels, out_channels)
    conv = tf.nn.conv2d(x_batch, weight, strides=conv2_stride, padding='SAME')
    return conv


if __name__ == '__main__':
    startTime = datetime.datetime.now()
    print('start')

    # Define a 1-D tensor to be convolved (shape = (1, 10, 2))
    tensor1 = np.array(np.arange(1, 1 + 20).reshape([1, 10, 2]), dtype=np.float32)
    # Define one convolution kernel of width 2 over 2 input channels (shape = (2, 2, 1))
    kernel1 = np.array(np.arange(1, 1 + 4), dtype=np.float32).reshape([2, 2, 1])
    # Run the 1-D convolution (conv1d) with stride 1
    cov1d = get_conv1d(tensor1, kernel1, 1)
    with tf.Session() as sess1:
        # Initialize variables
        tf.global_variables_initializer().run()
        # Print the convolution result
        print('1-D convolution result:')
        print(sess1.run(cov1d))

    # Define a 2-D tensor to be convolved (shape = (1, 5, 5, 3))
    tensor2 = tf.constant([
        [
            [[0.0, 1.0, 2.0], [1.0, 1.0, 0.0], [1.0, 1.0, 2.0], [2.0, 2.0, 0.0], [2.0, 0.0, 2.0]],
            [[0.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, 1.0, 1.0], [0.0, 1.0, 2.0], [0.0, 2.0, 1.0]],
            [[1.0, 1.0, 1.0], [1.0, 2.0, 0.0], [0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 2.0, 1.0]],
            [[1.0, 0.0, 2.0], [0.0, 2.0, 0.0], [1.0, 1.0, 2.0], [1.0, 2.0, 0.0], [1.0, 1.0, 0.0]],
            [[0.0, 2.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 1.0], [1.0, 2.0, 1.0], [0.0, 0.0, 2.0]],
        ]
    ])
    # Define a convolution kernel (shape = (1, 3, 3, 3): height 1, width 3,
    # 3 input channels, 3 output channels)
    kernel2 = tf.constant([
        [
            [[1.0, -1.0, 0.0], [1.0, 0.0, 1.0], [-1.0, -1.0, 0.0]],
            [[-1.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, -1.0, 1.0]],
            [[-1.0, 1.0, 0.0], [-1.0, -1.0, -1.0], [0.0, 0.0, 1.0]],
        ]
    ])
    # Run the 2-D convolution (conv2d) with strides [1, 1, 1, 1]
    cov2d = get_conv2d(tensor2, kernel2, [1, 1, 1, 1])
    with tf.Session() as sess2:
        # Initialize variables
        tf.global_variables_initializer().run()
        # Print the convolution result
        print('2-D convolution result:')
        print(sess2.run(cov2d))

    endTime = datetime.datetime.now()
    totalTime = (endTime - startTime).seconds
    print(startTime, '--------', endTime)
    print('Total time: %d seconds' % totalTime)
Output:
1-D convolution result:
[[[ 30.]
[ 50.]
[ 70.]
[ 90.]
[110.]
[130.]
[150.]
[170.]
[190.]
[ 59.]]]
2-D convolution result:
[[[[ 0. -2. 1.]
[-4. -2. 3.]
[-1. -3. 2.]
[-4. -1. 5.]
[ 4. -4. 6.]]
[[-3. -1. -2.]
[-3. 0. 1.]
[ 2. -3. 5.]
[ 1. -6. 2.]
[ 0. -3. 2.]]
[[-3. -2. 0.]
[ 0. -2. 4.]
[ 4. -2. 6.]
[-3. -6. 2.]
[ 0. -4. 1.]]
[[-1. -4. 1.]
[-3. -3. 1.]
[ 0. -3. 3.]
[-3. -3. 1.]
[ 2. -1. 3.]]
[[-2. 2. 0.]
[-1. -1. 4.]
[ 0. -4. 0.]
[ 0. -2. 5.]
[ 4. -4. 4.]]]]
2019-05-23 13:21:44.486630 -------- 2019-05-23 13:21:45.350002
Total time: 0 seconds
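As a quick sanity check of the first 1-D value printed above: with SAME padding and stride 1, output[0] covers the first two positions of the input, and a small NumPy computation on the same data reproduces the 30.

import numpy as np

x = np.arange(1, 21, dtype=np.float32).reshape(10, 2)  # (positions, channels), same data as tensor1
k = np.arange(1, 5, dtype=np.float32).reshape(2, 2)     # (filter width, channels), same data as kernel1
# output[0] = 1*1 + 2*2 + 3*3 + 4*4 = 30
print((x[0] * k[0]).sum() + (x[1] * k[1]).sum())        # 30.0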
Pooling operation
Reference: https://blog.csdn.net/sunflower_sara/article/details/81322048
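Below is a minimal 2x2 max-pooling sketch in the same TensorFlow 1.x style as the practice code above; the 4x4 single-channel input is a made-up toy example, not taken from the reference.

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(16, dtype=np.float32).reshape([1, 4, 4, 1]))
# ksize and strides are [batch, height, width, channels]: non-overlapping 2x2 windows
pool = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
with tf.Session() as sess:
    # Each 2x2 window is reduced to its maximum: [[5, 7], [13, 15]]
    print(sess.run(pool).reshape(2, 2))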
Principles of Text-CNN
Reference 1: https://blog.csdn.net/chuchus/article/details/77847476
Reference 2: https://blog.csdn.net/xh999bai/article/details/89483673
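Below is a rough Text-CNN forward-pass sketch in the same TensorFlow 1.x style, assuming toy hyperparameters (vocab_size=1000, seq_len=20, embed_dim=8, filter sizes 2/3/4, 4 filters each, 2 classes). It shows the usual embedding, convolution, max-over-time pooling, concatenation, and dense flow, not the exact setup from the references.

import tensorflow as tf

# Toy hyperparameters (assumptions for illustration only)
vocab_size, seq_len, embed_dim = 1000, 20, 8
filter_sizes, num_filters, num_classes = [2, 3, 4], 4, 2

inputs = tf.placeholder(tf.int32, [None, seq_len])                   # token ids
embedding = tf.get_variable('embedding', [vocab_size, embed_dim])
# Treat the embedded sentence as a 1-channel "image": (batch, seq_len, embed_dim, 1)
embedded = tf.expand_dims(tf.nn.embedding_lookup(embedding, inputs), -1)

pooled = []
for fs in filter_sizes:
    # One set of filters per n-gram size, each spanning the full embedding dimension
    w = tf.get_variable('w_%d' % fs, [fs, embed_dim, 1, num_filters])
    conv = tf.nn.relu(tf.nn.conv2d(embedded, w, strides=[1, 1, 1, 1], padding='VALID'))
    # Max-over-time pooling: keep one value per filter
    pooled.append(tf.nn.max_pool(conv, ksize=[1, seq_len - fs + 1, 1, 1],
                                 strides=[1, 1, 1, 1], padding='VALID'))

# Concatenate the pooled features from all filter sizes and classify
features = tf.reshape(tf.concat(pooled, 3), [-1, num_filters * len(filter_sizes)])
logits = tf.layers.dense(features, num_classes)                       # class scores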