Hardcore Paper Walkthrough (1): YOLOR


Paper Overview

Paper: You Only Learn One Representation: Unified Network for Multiple Tasks.
Project page: https://github.com/WongKinYiu/yolor
For readers' convenience, I have also uploaded the PDF to Baidu Cloud: https://pan.baidu.com/s/1qqrwk_-XuNzQ-4u3CyRAEw
Access code: pr18


Paper Walkthrough (original text with translation)


Abstract

Original text:

People “understand” the world via vision, hearing, tactile, and also the past experience. Human experience can be learned through normal learning (we call it explicit knowledge), or subconsciously (we call it implicit knowledge). These experiences learned through normal learning or subconsciously will be encoded and stored in the brain. Using these abundant experience as a huge database, human beings can effectively process data, even they were unseen beforehand. In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just like the human brain can learn knowledge from normal learning as well as subconsciousness learning. The unified network can generate a unified representation to simultaneously serve various tasks. We can perform kernel space alignment, prediction refinement, and multi-task learning in a convolutional neural network. The results demonstrate that when implicit knowledge is introduced into the neural network, it benefits the performance of all tasks. We further analyze the implicit representation learnt from the proposed unified network, and it shows great capability on catching the physical meaning of different tasks. The source code of this work is at: https://github.com/WongKinYiu/yolor

Rough translation:

In short, humans and computers learn differently. Humans can acquire knowledge either through normal learning (what the authors call explicit knowledge) or subconsciously (implicit knowledge), store it in the brain, and then draw on it to handle things they have never encountered before. The authors propose a unified network, analogous to how the human brain works, that encodes implicit and explicit knowledge together. Within this network they perform kernel space alignment, prediction refinement, and multi-task learning. They find that introducing implicit knowledge into a neural network improves performance on all tasks, and that the learned implicit representation captures the physical meaning of the different tasks well.
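To make the idea concrete, here is a minimal sketch of how an implicit representation can be combined with explicit features. This is an illustration, not YOLOR's actual implementation: the paper describes combining a learned, input-independent implicit vector with explicit (input-dependent) features via operators such as addition and multiplication; the function names, values, and 4-dimensional toy vectors below are all made up for demonstration.

```python
# Toy illustration of YOLOR-style implicit knowledge: a learned vector,
# independent of the input, combined with explicit features from the network.
# Pure Python; in a real model these would be tensors and the implicit
# vector would be a trainable parameter updated by backpropagation.

def combine_addition(explicit, implicit):
    """Refine explicit features by adding the implicit vector
    (the addition operator described in the paper, e.g. for
    prediction refinement)."""
    return [e + i for e, i in zip(explicit, implicit)]

def combine_multiplication(explicit, implicit):
    """Element-wise scaling of explicit features by the implicit vector
    (the multiplication operator, e.g. for kernel space alignment)."""
    return [e * i for e, i in zip(explicit, implicit)]

# Hypothetical 4-dim explicit feature produced from one input image,
# and an implicit vector that was learned jointly with the weights.
explicit_feature = [0.5, -1.0, 2.0, 0.0]
implicit_prior = [0.5, 0.5, -0.5, 0.25]  # same for every input

refined = combine_addition(explicit_feature, implicit_prior)
scaled = combine_multiplication(explicit_feature, implicit_prior)
```

The key point the sketch shows: `implicit_prior` does not depend on the current input, so it acts as a learned prior that every prediction shares, which is how one representation can serve multiple tasks at once.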


Part 1: Introduction

Original text:

The way to construct the above unified networks is to combine compressive sensing and deep learning, and the main theoretical basis can be found in our previous work [16, 17, 18]. In [16], we prove the effectiveness of reconstructing residual error by extended dictionary. In [17, 18], we use sparse coding to reconstruct feature map of a CNN and make it more robust. The contribution of this work are summarized as follows:
1. We propose a unified network that can accomplish various
