Kervolutional Neural Networks

Reference: https://www.jianshu.com/p/21d23b987586

 

Venue: CVPR 2019

Problem addressed: extending CNNs (convolution) to a non-linear feature space

 

2 Problems Raised

(1) Convolutional layers, being linear operators, are poorly suited to non-linear data

(2) Non-linear layers (e.g. ReLU) apply point-wise non-linearity, whereas patch-wise non-linearity may work better
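The point-wise vs. patch-wise distinction can be illustrated on a single patch. This is a minimal NumPy sketch, not code from the paper; the patch and filter values are made up, and the patch-wise map shown is the polynomial kernel (x·w + c)^d with c = 1, d = 2:

```python
import numpy as np

patch = np.array([0.5, -1.0, 2.0])  # one flattened image patch (toy values)
w = np.array([1.0, 0.5, 0.5])       # one flattened filter (toy values)

# Point-wise non-linearity: a linear convolution output, then ReLU per element.
pointwise = max(0.0, float(patch @ w))

# Patch-wise non-linearity: the whole patch enters the non-linear map at once,
# here a polynomial kernel k(x, w) = (x . w + c)^d with c = 1, d = 2.
patchwise = (float(patch @ w) + 1.0) ** 2
```

In the point-wise case the non-linearity only ever sees one scalar at a time; in the patch-wise case it acts on the interaction between the whole patch and the whole filter.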

 

3 Requirements

(1) Preserve the weight-sharing (weight sparsity) property of convolutional layers

(2) Keep computational complexity low

4 Implementation
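Kervolution keeps the sliding-window, shared-filter structure of convolution but replaces the inner product ⟨x, w⟩ with a kernel κ(x, w). A minimal NumPy sketch for a single-channel 2-D input, assuming the polynomial kernel form (x·w + cp)^dp from the paper (the helper name `kervolve2d` and the toy hyper-parameter values are mine):

```python
import numpy as np

def kervolve2d(image, weight, kernel, stride=1):
    """Slide `weight` over `image`; at each location apply `kernel` to the
    flattened patch and flattened filter instead of a plain dot product."""
    kh, kw = weight.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = kernel(patch.ravel(), weight.ravel())
    return out

# Polynomial kernel; cp and dp are hyper-parameters of the kernel.
def poly_kernel(x, w, cp=1.0, dp=2):
    return (x @ w + cp) ** dp

# Ordinary convolution (cross-correlation) is the special case
# where the kernel is just the dot product.
linear_kernel = lambda x, w: x @ w

img = np.arange(16, dtype=float).reshape(4, 4)
filt = np.full((2, 2), 0.1)
conv_out = kervolve2d(img, filt, linear_kernel)  # plain convolution
kerv_out = kervolve2d(img, filt, poly_kernel)    # polynomial kervolution
```

Because only the per-patch scoring function changes, weight sharing is untouched (requirement 1), and the extra cost over convolution is a cheap scalar non-linearity per output element (requirement 2).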


Figure 1. The comparison of learned filters on MNIST from the first layer (six channels and filter size of 5 × 5) of CNN and polynomial KNN. It is interesting that some of the learned filters (e.g. channel 4) from KNN are quite similar to CNN. This indicates that part of the kervolutional layer learns linear behavior, which is controlled by the linear part of the polynomial kernel. 

As shown in Figure 2 (a) and (b), although the computational complexity of non-linear kernels is slightly higher than that of the linear kernel (convolution), the polynomial and Gaussian KNNs still converge to a validation accuracy of 98% more than 2× faster than the original CNN. However, the convergence of the sigmoid KNN is 2× slower than that of the CNN, which indicates that the choice of kernel function is crucial and has a significant impact on performance. Thanks to the wealth of traditional kernel methods, many other useful kernels are available [47], although not all of them can be tested in this paper. The L1- and L2-norm KNNs achieve accuracies of 99.05% and 99.19%, respectively, but they are omitted from Figure 2 (a) and (b) because their curves nearly coincide with the polynomial curve.
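For reference, the kernels compared above can all be written as functions of a flattened patch x and filter w. This is an illustrative sketch; the hyper-parameter values are placeholders, and the exact parameterizations (e.g. of the Lp-norm kervolutions) may differ from the paper's:

```python
import numpy as np

def polynomial(x, w, cp=1.0, dp=2):
    # (x . w + cp)^dp : contains a linear term, hence the CNN-like filters in Fig. 1
    return (x @ w + cp) ** dp

def gaussian(x, w, gamma=1.0):
    # RBF kernel on the patch-filter distance
    return np.exp(-gamma * np.sum((x - w) ** 2))

def sigmoid(x, w):
    # the kernel whose KNN converged 2x slower than CNN above
    return np.tanh(x @ w)

def l1_norm(x, w):
    return np.sum(np.abs(x - w))

def l2_norm(x, w):
    return np.sqrt(np.sum((x - w) ** 2))
```

Any of these can be dropped into the sliding-window operator in place of the dot product; only the per-patch scoring changes.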
