89.89% on CIFAR-10 in PyTorch

The full code is available here; just clone it to your machine and it is ready to run. As a former Torch7 user, I attempt to reproduce the results from the Torch7 post.

My friends Wu Jun and Zhang Yujing claimed that Batch Normalization [1] is useless. I want to prove them wrong (slap them in the face, as we say), and CIFAR-10 is a nice playground to start with.

[Figure: sample CIFAR-10 images]

CIFAR-10 contains 60,000 labeled 32×32 images across 10 classes; the training set has 50,000 images and the test set has 10,000.

The dataset is quite small by today’s standards, but still a good playground for machine learning algorithms. I use only horizontal flips to augment the data. You will need an NVIDIA GPU with at least 3 GB of memory.
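For concreteness, here is a minimal sketch of such a data pipeline using torchvision’s built-in CIFAR-10 loader; the repo may load and preprocess the data differently, and the batch size of 128 is my assumption.

```
import torch
import torchvision
import torchvision.transforms as transforms

# Horizontal flips are the only augmentation mentioned in the post.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
test_transform = transforms.ToTensor()

train_set = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=train_transform)
test_set = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True, transform=test_transform)

# Batch size 128 is an assumption; the post does not state it.
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(
    test_set, batch_size=128, shuffle=False, num_workers=2)
```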

The post and the code consist of two parts/files:

  • model definition
  • training

The model: Vgg.py

It’s a VGG16-like [2] network (not identical: I remove the first FC layer) with many 3×3 filters and padding of 1, so the feature maps keep their size after each convolution; they shrink only after max-pooling. The weights of the convolutional layers are initialized MSR-style. Batch Normalization and Dropout are used together, as sketched below.
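The following is a rough sketch of such a network, not the repo’s exact Vgg.py: the dropout rate, its placement after every convolution, and the single-linear-layer classifier are my assumptions based on the description above.

```
import torch.nn as nn

# VGG16 convolutional configuration; 'M' marks a 2x2 max-pooling.
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def make_layers(cfg, dropout=0.4):  # dropout rate is an assumption
    layers, in_channels = [], 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            # 3x3 convolution with padding 1 keeps the feature-map size;
            # BN and Dropout are used together, as described in the post.
            layers += [nn.Conv2d(in_channels, v, kernel_size=3, padding=1),
                       nn.BatchNorm2d(v),
                       nn.ReLU(inplace=True),
                       nn.Dropout(dropout)]
            in_channels = v
    return nn.Sequential(*layers)

class Vgg(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = make_layers(cfg)
        # With the first FC layer removed, a single linear classifier on the
        # 512-d feature vector is one plausible reading of "VGG16-like".
        self.classifier = nn.Linear(512, num_classes)
        for m in self.modules():
            # MSR-style (Kaiming) initialization for convolutional weights.
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out',
                                        nonlinearity='relu')

    def forward(self, x):
        x = self.features(x)           # five poolings: 32x32 -> 1x1x512
        return self.classifier(x.view(x.size(0), -1))
```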

Training: train.py

That’s it; you can start training:

python train.py

The parameters with which the model achieves the best performance are the defaults in the code. I used SGD (a little old-fashioned) with cross-entropy loss, a learning rate of 0.01, momentum 0.9, and weight decay 0.0005, dropping the learning rate every 25 epochs. After a few hours you will have the model. The accuracy record and the model at each checkpoint are saved in the ‘save’ folder.
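As a sketch, the training setup described above might look like the following; the post only states the 25-epoch step, so the decay factor (0.1) and the total epoch count (100) are assumptions, and `Vgg` and `train_loader` refer to the sketches above.

```
import torch.nn as nn
import torch.optim as optim

model = Vgg().cuda()                       # Vgg as sketched above
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=0.0005)
# Drop the learning rate every 25 epochs; gamma=0.1 is an assumption.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.1)

for epoch in range(100):                   # total epochs: an assumption
    model.train()
    for inputs, targets in train_loader:
        inputs, targets = inputs.cuda(), targets.cuda()
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```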

How accuracy improves during training:

[Figure: CIFAR-10 test accuracy vs. training epoch]

The best accuracy is 89.89%; removing BN alone or Dropout alone results in 88.67% and 88.73% accuracy, respectively, which supports the claim that Batch Normalization accelerates deep network training. Removing both BN and Dropout results in 86.65% accuracy, and we can observe overfitting.

References

  1. S. Ioffe, C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [arXiv]
  2. K. Simonyan, A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. [arXiv]