DL
Asthestarsfalll
Beware the sense of superiority that knowledge brings.
Demystifying-Local-Vision-Transformer
Paper: Demystifying Local Vision Transformer: Sparse Connectivity, Weight Sharing, and Dynamic Weight
Authors: Qi Han, Zejia Fan, Qi Dai, Lei Sun, Ming-Ming Cheng, Jiaying Liu, Jingdong Wang
Code: https://github.com/Atten4Vis/DemystifyLocalViT/
Introduction: The paper's main findings are as follows…
Original post · 2021-10-24 21:10:10
PP-LCNet: A Lightweight CPU Convolutional Neural Network
An optimized combination of lightweight tricks.
Paper: PP-LCNet: A Lightweight CPU Convolutional Neural Network
Authors: Cheng Cui, Tingquan Gao, Shengyu Wei, Yuning Du…
Code: https://github.com/PaddlePaddle/PaddleClas
Abstract: Summarizes a set of techniques that raise accuracy while leaving latency almost unchanged, and proposes PP-LCNet, a lightweight CPU network built on the MKLDNN acceleration strategy…
Original post · 2021-10-23 15:46:54
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
A hierarchical local Vision Transformer that serves as a general-purpose backbone and achieves state-of-the-art results on a range of downstream tasks.
Paper: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Authors: Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo
Code: https://github.com/microsoft/Swin-Transf…
Original post · 2021-10-23 15:31:23
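The shifted-window idea in the summary above can be sketched in a few lines: partition the feature map into non-overlapping windows, then cyclically shift the map by half a window before the next partition so that the new windows straddle the previous window boundaries. A minimal NumPy sketch, not the repository's implementation — `window_partition` and `cyclic_shift` are names of my own:

```python
import numpy as np

def window_partition(x, window_size):
    """Split a feature map into non-overlapping windows.

    x: (H, W, C); H and W are assumed divisible by window_size.
    Returns (num_windows, window_size, window_size, C).
    """
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

def cyclic_shift(x, shift):
    """Shifted-window trick: roll the map so the next layer's windows
    straddle the previous window boundaries."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

x = np.arange(8 * 8 * 1, dtype=np.float32).reshape(8, 8, 1)
windows = window_partition(x, 4)   # 4 non-overlapping 4x4 windows
shifted = cyclic_shift(x, 2)       # shift by window_size // 2
```

Attention is then computed within each window; the cyclic shift (rather than padding) keeps the number of windows, and thus the cost, unchanged across layers.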
CCNet: Criss-Cross Attention for Semantic Segmentation
Paper: CCNet: Criss-Cross Attention for Semantic Segmentation
Authors: Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Wenyu Liu, Thomas S. Huang
Code: https://github.com/speedinghzl/CCNet
Abstract: Contextual information is crucial for visual understanding problems such as semantic segmentation and object detection. This paper proposes a criss-cross network (CCNet) that, very efficiently…
Original post · 2021-08-19 21:29:25
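The criss-cross idea can be illustrated with a tiny, deliberately loop-based NumPy sketch: each position attends only to the keys on its own row and column, so the attention path has on the order of H+W entries instead of H×W. This is an illustrative approximation of the mechanism, not the repository's CUDA kernel — the paper counts the overlapping position once (H+W−1 entries), while the sketch below keeps the duplicate for simplicity:

```python
import numpy as np

def criss_cross_attention(q, k, v):
    """Each position (i, j) attends over its row i and column j only.

    q, k, v: (C, H, W). Returns a (C, H, W) aggregation.
    """
    C, H, W = q.shape
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            # Keys/values on the criss-cross path of (i, j).
            ks = np.concatenate([k[:, i, :], k[:, :, j]], axis=1)  # (C, W+H)
            vs = np.concatenate([v[:, i, :], v[:, :, j]], axis=1)
            e = q[:, i, j] @ ks                    # (W+H,) affinities
            a = np.exp(e - e.max())
            a /= a.sum()                           # softmax over the path
            out[:, i, j] = vs @ a
    return out

C, H, W = 4, 5, 6
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((C, H, W)) for _ in range(3))
out = criss_cross_attention(q, k, v)
```

Stacking this operation twice (the paper's "recurrent" form) lets every position indirectly gather information from the full H×W map at far lower cost than full self-attention.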
FcaNet: Frequency Channel Attention Networks
Paper: FcaNet: Frequency Channel Attention Networks
Authors: Zequn Qin, Pengyi Zhang, Fei Wu, Xi Li
Code: https://github.com/cfzd/FcaNet
Abstract: Channel attention has seen great success in computer vision, and much work has gone into designing more efficient channel-attention modules, while overlooking one issue: the use of global average pooling as the pre-processing step. Based on frequency analysis, this paper proves mathematically that global average pooling is a special case of feature decomposition in the frequency domain, and on that basis generalizes channel…
Original post · 2021-08-14 16:44:33
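The claim that global average pooling is a special case of frequency-domain decomposition is easy to verify numerically: the (0, 0) basis of the 2-D DCT-II is a constant, so the DC coefficient equals H·W times the spatial mean. A minimal NumPy check (the function name `dct2_coeff` is mine; no normalization factors, which do not affect the proportionality):

```python
import numpy as np

def dct2_coeff(x, u, v):
    """Coefficient (u, v) of an unnormalized 2-D DCT-II of x with shape (H, W)."""
    H, W = x.shape
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    basis = np.cos(np.pi * u * (2 * i + 1) / (2 * H)) * \
            np.cos(np.pi * v * (2 * j + 1) / (2 * W))
    return (x * basis).sum()

x = np.random.default_rng(0).random((7, 5))
# At (u, v) = (0, 0) the basis is constant 1, so the DC coefficient
# is exactly the sum of x, i.e. H * W times its global average pool.
dc = dct2_coeff(x, 0, 0)
```

FcaNet's generalization is then to pool each channel group with a different (u, v) basis instead of only the DC term.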
SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks
Paper: SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks
Authors: Lingxiao Yang, Ru-Yuan Zhang, Lida Li, Xiaohua Xie
Code: https://github.com/ZjjConan/SimAM
Introduction: This paper proposes a simple and effective 3D attention module. Building on a well-known neuroscience theory, it defines an energy function and derives a fast closed-form solution that assigns a weight to every neuron. Main…
Original post · 2021-08-02 16:40:58
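The closed-form solution mentioned above is compact enough to write out: for each neuron, the inverse energy is its squared deviation from the channel mean, scaled by a variance term, and the feature is reweighted by a sigmoid of that quantity. A NumPy sketch of my reading of the formulation, not the official module; the regularizer `lam` (the paper's λ) and its default value are assumptions here:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free 3D attention in the SimAM style.

    x: (C, H, W). Every neuron gets a weight from the closed-form
    minimal energy; the output is x * sigmoid(1 / E).
    """
    C, H, W = x.shape
    n = H * W - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                          # squared deviation per neuron
    v = d.sum(axis=(1, 2), keepdims=True) / n  # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5          # inverse energy per neuron
    return x / (1 + np.exp(-e_inv))            # x * sigmoid(e_inv)

x = np.random.default_rng(0).standard_normal((3, 8, 8))
y = simam(x)
```

Because the weights come from this closed form, the module adds no learnable parameters, in contrast to SE- or CBAM-style channel attention.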