How to choose the convolution kernel size, the number of convolutional layers, and the number of feature maps per layer in a CNN

The principle for choosing kernel size and depth: go small and deep. When in doubt, just pick 3×3.

Three stacked 3×3 convolutions have the same effective receptive field as a single 7×7 convolution. The cost of one convolution is proportional to (kernel height × kernel width) × (image height × image width), so three 3×3 convolutions cost 3 × (3 × 3) = 27 multiply-accumulates per output pixel, versus 7 × 7 = 49 for the single large kernel. Same receptive field, lower cost (plus extra non-linearities between the layers), so the stack of small convolutions is the natural choice.
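The cost comparison above can be checked directly by counting weights in PyTorch (the channel count C = 64 here is an arbitrary choice for illustration):

```python
import torch.nn as nn

# Three stacked 3x3 convolutions (effective receptive field 7x7)
# vs. one 7x7 convolution, with C channels in and out.
# Parameter counts (ignoring bias): 3 * (3*3) * C^2  vs.  (7*7) * C^2
C = 64
stacked = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
)
single = nn.Conv2d(C, C, kernel_size=7, padding=3, bias=False)

n_stacked = sum(p.numel() for p in stacked.parameters())
n_single = sum(p.numel() for p in single.parameters())
print(n_stacked, n_single)  # 110592 200704
```

The stack uses 27/49 ≈ 55% of the weights of the single 7×7 layer while covering the same receptive field.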


Setting the number of layers: take the depth of the best-performing model you can find. Depth interacts with the training data, the activation function, the gradient-update algorithm, and other factors, so it cannot be determined by a simple rule; it has to be established empirically.

Setting the number of kernels (feature maps) per layer

Increase the channel count in multiples of 16; this choice matches GPU hardware configurations.
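As an illustration of such a schedule (the specific depth and the 16 → 32 → 64 channel counts below are hypothetical choices, not prescribed by the text), a VGG-style stack where each stage doubles the channels in multiples of 16:

```python
import torch
import torch.nn as nn

# Hypothetical channel schedule growing in multiples of 16: 3 -> 16 -> 32 -> 64
channels = [3, 16, 32, 64]
layers = []
for c_in, c_out in zip(channels, channels[1:]):
    layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
               nn.ReLU(),
               nn.MaxPool2d(2)]
net = nn.Sequential(*layers)

x = torch.randn(1, 3, 32, 32)
print(net(x).shape)  # torch.Size([1, 64, 4, 4]): each stage halves H and W
```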





The first CNN appeared in the work of Fukushima in 1980 and was called Neocognitron. The basic architectural ideas behind the CNN (local receptive fields, shared weights, and spatial or temporal subsampling) allow such networks to achieve some degree of shift and deformation invariance and at the same time reduce the number of training parameters. Since 1989, Yann LeCun and co-workers have introduced a series of CNNs with the general name LeNet, which, contrary to the Neocognitron, use supervised training. In this case, the major advantage is that the whole network is optimized for the given task, making this approach usable for real-world applications. LeNet has been successfully applied to character recognition, generic object recognition, face detection and pose estimation, obstacle avoidance in an autonomous robot, etc.

The myCNN class allows one to create, train, and test generic convolutional networks (e.g., LeNet) as well as more general networks with the following features:

- any directed acyclic graph can be used for connecting the layers of the network;
- the network can have any number of arbitrarily sized input and output layers;
- the neuron's receptive field (RF) can have an arbitrary stride (step of local RF tiling), which means that in the S-layer RFs can overlap and in the C-layer the stride can differ from 1;
- any layer or feature map of the network can be switched from trainable to nontrainable mode (and vice versa), even during training;
- a new layer type: softmax-like M-layer.

The archive contains the myCNN class source (with comments) and a simple example of LeNet5 creation and training. All updates and new releases can be found here: http://sites.google.com/site/chumerin/projects/mycnn
A convolutional neural network (CNN) is a feed-forward neural network used mainly for image and video recognition, classification, and processing. The convolutional layer is one of the core layers of a CNN; its role is to extract features from the input data. A convolutional layer contains multiple kernels, and each element of a kernel carries a weight coefficient and a bias term (bias vector), analogous to a neuron in a feed-forward network.

The layer works by convolving each kernel with the input data to produce a feature map. The kernel is usually smaller than the input, so during the convolution it slides over the input with a fixed step (the stride), computing one feature value at each position. These values make up the feature map, which represents one kind of feature of the input.

The parameters of a convolutional layer include the kernel size, stride, and padding scheme. Padding fills in values around the edges of the input so that the convolution can preserve the input's spatial size. The output size of the layer is determined by the input size, kernel size, stride, and padding.

Below is an example of a convolutional layer:

```python
import torch
import torch.nn as nn

# Define a convolutional layer
conv_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)

# Input data: a batch of one 3-channel 32x32 image
input_data = torch.randn(1, 3, 32, 32)

# Apply the convolution
output_data = conv_layer(input_data)

# Size of the output feature map
print(output_data.size())  # torch.Size([1, 16, 32, 32])
```
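The dependence of output size on kernel size, stride, and padding follows the standard convolution formula, out = floor((W + 2P - K) / S) + 1, which a small helper makes easy to check:

```python
def conv_out_size(w, k, s, p):
    # Standard convolution output-size formula: floor((W + 2P - K) / S) + 1
    return (w + 2 * p - k) // s + 1

# 32x32 input, 3x3 kernel, stride 1, padding 1 -> size preserved
print(conv_out_size(32, 3, 1, 1))  # 32
# Same kernel and padding with stride 2 -> size halved
print(conv_out_size(32, 3, 2, 1))  # 16
```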
