Notes on Tuning Convolutional Neural Networks

This post looks at the accuracy drop seen in binarized neural networks, citing a paper which finds that a lower learning rate is better suited to training a BNN because it avoids frequent weight sign changes. It also collects references and tips on neural network tuning, covering data preprocessing, initialization, backpropagation optimization, activation function selection, and feature engineering, all aimed at improving the performance and efficiency of deep learning models.

A problem I ran into during my internship: after adding the XNOR operation to a binarized network, accuracy dropped severely. Changing the first and last layers to skip XNOR (keeping them in full precision) did not fix it, and for a while I had no idea where to start. A typical setup keeps a real-valued latent weight that the optimizer updates, and binarizes it to {-1, +1} only in the forward pass, as in the sketch below.
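For context, here is a minimal sketch of that wiring in PyTorch. It is a simplified sign-only binarization with a straight-through estimator (STE); the class names BinarizeSTE and BinaryConv2d are illustrative, and the full XNOR-Net scheme additionally uses per-filter scaling factors, which are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1}; pass gradients straight through."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Zero the gradient where |w| > 1, as in BinaryNet-style training.
        return grad_out * (w.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Conv layer that binarizes its weights in the forward pass.
    The optimizer still updates the real-valued latent self.weight;
    a sign flip happens whenever a latent weight crosses zero."""
    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Because only the sign of the latent weight reaches the forward pass, the learning rate on those latent weights directly controls how often signs flip, which is exactly the effect the paper below measures.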

After some hesitation I came across a paper:
How to Train a Compact Binary Neural Network with High Accuracy

To consolidate our analysis, we have the statistics on how many weights are changing their signs under different learning rates during BNN and full precision network training. The results are shown in Figure 2. It can be observed that, under the same learning rate of 0.01, the sign changes for BNN is nearly 3 orders of magnitude larger than that of a full precision network. Only when the learning rate of BNN is lowered to 0.0001, the two results become close. Hence we conclude that a lower learning rate is more preferred for training BNN to avoid frequent sign changes. Our experiments show that when the learning rate is lowered to 0.0001, a tremendous accuracy gain is obtained and BNN …
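To check this on your own model, one simple piece of instrumentation (my own sketch, not from the paper) is to count how many latent weights cross zero after each optimizer step, since each zero crossing flips one binary weight:

```python
import torch

def count_sign_flips(params_before, params_after):
    """Count latent weights whose sign changed across one optimizer step;
    each zero crossing flips one binary weight from -1 to +1 or back."""
    return sum((torch.sign(w0) != torch.sign(w1)).sum().item()
               for w0, w1 in zip(params_before, params_after))

# Usage inside a training loop (model, optimizer, loss assumed to exist):
# before = [p.detach().clone() for p in model.parameters()]
# loss.backward()
# optimizer.step()
# flips = count_sign_flips(before, [p.detach() for p in model.parameters()])
```

In an otherwise identical run, dropping the learning rate from 0.01 to 0.0001 should make this counter fall sharply, which is the effect Figure 2 of the paper reports.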
