EMP-SSL: TOWARDS SELF-SUPERVISED LEARNING IN ONE TRAINING EPOCH

This paper proposes a new method called EMP-SSL that improves the efficiency of self-supervised learning by increasing the number of crops taken from each image instance, thereby reducing the number of training iterations required. EMP-SSL reaches high performance on multiple datasets within at most one training epoch and outperforms baseline methods on transfer learning.

Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representations. Despite this empirical success, most self-supervised learning methods are rather "inefficient" learners, typically taking hundreds of training epochs to fully converge. In this work, we show that the key to efficient self-supervised learning is to increase the number of crops taken from each image instance. Leveraging one of the state-of-the-art SSL methods, we introduce a simple self-supervised learning method called Extreme-Multi-Patch Self-Supervised Learning (EMP-SSL) that does not rely on many of the heuristic techniques common in SSL, such as weight sharing between branches, feature-wise normalization, output quantization, and stop-gradient, and that reduces the number of training epochs by two orders of magnitude. We show that the proposed method converges to 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet, and 58.5% on ImageNet-100 in just one epoch. Furthermore, the proposed method achieves 91.5% on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet, and 78.9% on ImageNet-100 with linear probing in less than ten training epochs. In addition, EMP-SSL shows significantly better transferability to out-of-domain datasets than baseline SSL methods. We will release the code at https://github.com/tsb0601/EMP-SSL.
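The core idea in the abstract, making the embeddings of many crops from the same image agree with each other, can be illustrated with a small sketch. This is only an assumption-laden illustration of the invariance term, not the paper's actual training objective: EMP-SSL additionally uses a total coding rate (TCR) regularizer to prevent representation collapse, which is omitted here, and the function name and shapes are hypothetical.

```python
import numpy as np

def invariance_loss(patch_embeddings: np.ndarray) -> float:
    """Negative mean cosine similarity between each patch embedding and
    the average embedding of all patches from the same image.

    patch_embeddings: array of shape (n_patches, dim), one row per crop.
    Illustrative only: EMP-SSL's full objective also includes a total
    coding rate (TCR) term to avoid collapse, which is not shown here.
    """
    # L2-normalize each patch embedding so the dot product is cosine similarity.
    z = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    # Average embedding over all crops of this image, renormalized.
    z_mean = z.mean(axis=0)
    z_mean = z_mean / np.linalg.norm(z_mean)
    # Minimizing this pulls every crop's embedding toward the shared mean.
    return -float(np.mean(z @ z_mean))
```

With many crops per image (the paper uses far more than the usual two views), each gradient step sees much more signal per image, which is the intuition behind the reduced number of training epochs.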

https://arxiv.org/pdf/2304.03977.pdf
