Deep Learning Notes, Week 2: Deep Convolutional Neural Networks

Classic Networks:

LeNet (handwritten digit recognition), AlexNet, VGGNet

Residual Network:

To get better performance out of a very deep network, we insert residual blocks into the plain network.

In theory, the deeper the network, the more accurate the predictions. In practice, however, a plain network becomes harder to train as depth increases, and its training error can actually go up. ResNet was introduced to fix this problem.

Structurally, a ResNet simply adds shortcut connections that carry an activation a[l] forward to a deeper layer: a[l] is added to that layer's linear output, and the non-linear activation (such as ReLU) is applied to the sum, i.e. a[l+2] = g(z[l+2] + a[l]).
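As a concrete illustration, here is a minimal residual block in PyTorch (a sketch under the simplest assumption that the shortcut and main path have matching shapes; real ResNets also use batch norm and a projection shortcut when shapes differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Minimal residual block: a[l+2] = ReLU(z[l+2] + a[l])."""
    def __init__(self, channels):
        super().__init__()
        # Two 3x3 convolutions form the main path; padding=1 keeps H and W
        # fixed, so the shortcut a[l] can be added without reshaping.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        shortcut = x                    # a[l], carried forward unchanged
        out = F.relu(self.conv1(x))     # a[l+1] = g(z[l+1])
        out = self.conv2(out)           # z[l+2], the linear output
        return F.relu(out + shortcut)   # a[l+2] = g(z[l+2] + a[l])

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)       # torch.Size([1, 64, 56, 56])
```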

One-by-One (1x1) Convolution: shrinking the number of channels

A 1x1 convolution does not change the height or width of the input, but it does change the number of channels. So if you want to reduce computation cost, you can use a 1x1 convolution to shrink the channel dimension before an expensive layer (a "bottleneck").
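For example (a PyTorch sketch; the 28x28x192 shapes and filter counts are illustrative numbers, and the cost estimates count multiplications only):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 192, 28, 28)          # 28x28 input with 192 channels

# A 1x1 convolution leaves height and width alone but remaps the channels:
reduce = nn.Conv2d(192, 32, kernel_size=1)
print(reduce(x).shape)                    # torch.Size([1, 32, 28, 28])

# Cost of a 5x5 convolution to 32 output channels, with and without a
# 1x1 bottleneck down to 16 channels first:
direct     = 28 * 28 * 32 * 5 * 5 * 192          # ~120M multiplications
bottleneck = (28 * 28 * 16 * 192                 # 1x1 conv down to 16 channels
              + 28 * 28 * 32 * 5 * 5 * 16)       # then the 5x5 conv
print(direct, bottleneck)                 # the bottleneck is roughly 10x cheaper
```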

Inception Network:

If you are not sure which filter size to use, or whether to use a pooling layer, you can apply them all in parallel and concatenate their outputs along the channel dimension; this multi-branch block is called an Inception module.
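A minimal sketch of such a module in PyTorch (branch filter counts are illustrative; the real GoogLeNet also places 1x1 bottlenecks before the 3x3 and 5x5 branches, which is omitted here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionModule(nn.Module):
    """Run 1x1, 3x3, 5x5 convolutions and max pooling in parallel,
    then stack their outputs along the channel dimension."""
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, 128, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, 32, kernel_size=5, padding=2)
        self.branch_pool = nn.Conv2d(in_ch, 32, kernel_size=1)  # 1x1 after pooling

    def forward(self, x):
        b1 = F.relu(self.branch1(x))
        b3 = F.relu(self.branch3(x))
        b5 = F.relu(self.branch5(x))
        bp = F.relu(self.branch_pool(F.max_pool2d(x, 3, stride=1, padding=1)))
        # "Same" padding everywhere keeps H and W equal across branches,
        # so the outputs can be concatenated channel-wise.
        return torch.cat([b1, b3, b5, bp], dim=1)  # 64+128+32+32 = 256 channels

x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192)(x).shape)  # torch.Size([1, 256, 28, 28])
```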

Transfer Learning:
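Instead of training a network from scratch, you can download weights pretrained on a large dataset (e.g. ImageNet), freeze the early layers, and retrain only the final softmax layer on your own, smaller dataset; with more data, you can unfreeze and fine-tune more layers. A minimal sketch with torchvision (assuming version ≥ 0.13 for the weights argument; the 10-class head is hypothetical):

```python
import torch
import torch.nn as nn
from torchvision import models

# A ResNet-18 pretrained on ImageNet, adapted to a hypothetical 10-class task.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze every pretrained parameter...
for param in model.parameters():
    param.requires_grad = False

# ...then replace the final fully connected layer; its weights are created
# fresh, so they remain trainable.
model.fc = nn.Linear(model.fc.in_features, 10)

# Train only the new head; the frozen layers act as a fixed feature extractor.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
```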

 

Data Augmentation:

Common Augmentation Methods:

A. Mirroring: flip an image horizontally.

B. Random Cropping: take random crops of the image.

C. Color Shifting: distort the colors by adding offsets to the R, G, and B channels.
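A short torchvision sketch of the three methods above (the library choice and parameter values are illustrative):

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),       # A. mirroring
    transforms.RandomCrop(size=224, padding=8),   # B. random cropping
    transforms.ColorJitter(brightness=0.4,        # C. color shifting:
                           contrast=0.4,          #    perturb the RGB values
                           saturation=0.4),
    transforms.ToTensor(),
])
# Apply `augment` to each PIL image as it is loaded, e.g. as the
# `transform` argument of a torchvision dataset.
```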
