Notes (3): Convolutional Neural Networks (1)

Convolutions

Why We Use Convolutions

Learning features that span the entire image (as in fully connected networks) is computationally very expensive. So we want another model, the locally connected network, which restricts the connections between the hidden units and the input units. Neurons in the visual cortex have localized receptive fields, which motivates convolutions.

How Convolutions Work

Given some large r×c images, we learn k feature maps of size a×b. Convolving each feature over every valid position of the image then gives a k×(r−a+1)×(c−b+1) array of convolved features.
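A minimal sketch of this in NumPy (strictly speaking this is cross-correlation, i.e. convolution without the kernel flip, which is the convention most deep-learning code uses); the sizes here are arbitrary assumptions for illustration:

```python
import numpy as np

def convolve_valid(image, feature):
    # "Valid" 2D convolution: slide an a x b feature over an r x c image,
    # producing an (r - a + 1) x (c - b + 1) map of responses.
    r, c = image.shape
    a, b = feature.shape
    out = np.empty((r - a + 1, c - b + 1))
    for i in range(r - a + 1):
        for j in range(c - b + 1):
            out[i, j] = np.sum(image[i:i + a, j:j + b] * feature)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # r = c = 8
features = rng.random((4, 3, 3))    # k = 4 features of size a = b = 3
convolved = np.stack([convolve_valid(image, f) for f in features])
print(convolved.shape)  # (4, 6, 6), i.e. k x (r-a+1) x (c-b+1)
```

The output shape matches the formula above: with r = c = 8 and a = b = 3, each feature map is 6×6.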

(Figure: Convolution)

Pooling

Why We Use Pooling

For the same reason that we use locally connected rather than fully connected networks. After we apply convolutions, we have k×(r−a+1)×(c−b+1) features; if k, r, and c are large enough, this can be computationally challenging.

Another reason is that if the input to the next layer is too large, the model can be prone to over-fitting.

Why Pooling Works

Images have the stationarity property, which implies that features that are useful in one region are also likely to be useful in other regions. Thus, to describe a large image, one natural approach is to aggregate statistics of these features at various locations. This aggregation operation is called pooling. There are many ways to aggregate, such as the mean or max operation.

How Pooling Works

There are several ways to do this. In fact, we can treat pooling as a non-overlapping convolution: the map is partitioned into disjoint blocks and each block is reduced to a single statistic.
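A minimal sketch of non-overlapping pooling, assuming the map dimensions divide evenly by the pooling size (the 6×6 map and 3×3 pooling region here are illustrative choices):

```python
import numpy as np

def pool(feature_map, p, op=np.mean):
    # Non-overlapping p x p pooling: partition the map into disjoint
    # p x p blocks and aggregate each block with `op` (mean or max).
    h, w = feature_map.shape
    assert h % p == 0 and w % p == 0, "map size must be divisible by p"
    blocks = feature_map.reshape(h // p, p, w // p, p)
    return op(blocks, axis=(1, 3))

fmap = np.arange(36, dtype=float).reshape(6, 6)
print(pool(fmap, 3))          # mean pooling -> 2 x 2 output
print(pool(fmap, 3, np.max))  # max pooling  -> 2 x 2 output
```

Each pooled output is 2×2 here, since the 6×6 map is covered by four disjoint 3×3 blocks; swapping `op` switches between mean and max pooling.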

(Figure: Pooling)

