Residual Attention Network for Image Classification (Paper Reading)

 Abstract

     In this work, we propose "Residual Attention Network", a convolutional neural network using an attention mechanism which can incorporate with state-of-the-art feed-forward network architectures in an end-to-end training fashion (what it does). Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process (how it works). Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers (its advantage).

      Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single-model, single-crop top-5 error). Note that our method achieves a 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels (experimental evidence).

Introduction

    Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literature [34, 16, 23, 40]. Attention not only serves to select a focused location but also enhances different representations of objects at that location. Previous works formulate attention drift as a sequential process to capture different attended aspects. However, as far as we know, no attention mechanism has been applied to a feedforward network structure to achieve state-of-the-art results in the image classification task (why do it: because no one had done this yet). Recent advances in image classification focus on training feedforward convolutional neural networks using "very deep" structure [27, 33, 10]. Inspired by the attention mechanism and recent advances in deep neural networks (source of the idea), we propose Residual Attention Network, a convolutional network that adopts a mixed attention mechanism in a "very deep" structure. The Residual Attention Network is composed of multiple Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper.

     Apart from the more discriminative feature representation brought by the attention mechanism, our model also exhibits the following appealing properties: (1) Increasing the number of Attention Modules leads to consistent performance improvement, as different types of attention are captured extensively. Fig. 1 shows an example of different types of attention for a hot air balloon image. The sky attention mask diminishes background responses while the balloon instance mask highlights the bottom part of the balloon. (2) It is able to incorporate with state-of-the-art deep network structures in an end-to-end training fashion. Specifically, the depth of our network can be easily extended to hundreds of layers. Our Residual Attention Network outperforms state-of-the-art residual networks on CIFAR-10, CIFAR-100 and the challenging ImageNet [5] image classification dataset with a significant reduction of computation (69% forward FLOPs) (results).

      All of the aforementioned properties, which are challenging to achieve with previous approaches, are made possible with the following contributions: (1) Stacked network structure: Our Residual Attention Network is constructed by stacking multiple Attention Modules. The stacked structure is the basic application of the mixed attention mechanism, so different types of attention are able to be captured in different Attention Modules. (2) Attention Residual Learning: Stacking Attention Modules directly would lead to an obvious performance drop. Therefore, we propose an attention residual learning mechanism to optimize very deep Residual Attention Networks with hundreds of layers. (3) Bottom-up top-down feedforward attention: The bottom-up top-down feedforward structure has been successfully applied to human pose estimation [24] and image segmentation [22, 25, 1]. We use such a structure as part of the Attention Module to add soft weights on features. This structure can mimic the bottom-up fast feedforward process and the top-down attention feedback in a single feedforward process, which allows us to develop an end-to-end trainable network with top-down attention. The bottom-up top-down structure in our work differs from the stacked hourglass network [24] in its intention of guiding feature learning. (contributions)
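Contributions (2) and (3) can be sketched together: a mask branch produces a soft attention map M(x) in (0, 1) via a bottom-up (downsample) top-down (upsample) path, and attention residual learning combines it with the trunk output T(x) as H(x) = (1 + M(x)) · T(x), so the mask modulates features without destroying them. Below is a minimal NumPy sketch under simplifying assumptions: the real modules use convolutions, pooling and bilinear interpolation, so the strided slicing and nearest-neighbour repeat here are stand-ins, and the trunk branch is the identity for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_branch(feat):
    """Bottom-up top-down sketch: downsample, upsample back to the
    original size, then squash to (0, 1) to form a soft mask M(x)."""
    down = feat[:, ::2, ::2]                        # bottom-up: halve H and W
    up = down.repeat(2, axis=1).repeat(2, axis=2)   # top-down: restore H and W
    return sigmoid(up)                              # soft mask in (0, 1)

def attention_module(feat):
    trunk = feat                  # stand-in for the trunk branch T(x)
    mask = mask_branch(feat)      # soft mask branch M(x)
    # attention residual learning: H(x) = (1 + M(x)) * T(x)
    return (1.0 + mask) * trunk

x = np.random.randn(8, 4, 4)      # (channels, height, width)
y = attention_module(x)
assert y.shape == x.shape
```

Because the output is (1 + M) · T rather than M · T, the mask can only amplify trunk features (by a factor between 1 and 2 here), which is what lets many such modules be stacked without the repeated masking attenuating the signal.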

Related Work

 

 
