A Survey on Semi-, Self- and Unsupervised Learning in Image Classification

Deep learning strategies have achieved outstanding success in computer vision. They reach state-of-the-art performance in a diverse range of tasks such as image classification [1, 2, 3], object detection, and semantic segmentation.

The quality of a deep neural network is strongly influenced by the number of labeled images available for supervised training. ImageNet is a huge labeled dataset of over one million images which allows the training of networks with impressive performance. Recent research shows that even larger datasets than ImageNet can improve these results. However, in many real-world applications it is not possible to create labeled datasets with millions of images. A common strategy for dealing with this problem is transfer learning: a network pretrained on a large dataset is fine-tuned on the target task. This strategy improves results even on small and specialized datasets such as medical imaging. It might be a practical workaround for some applications, but the fundamental issue remains: unlike humans, supervised learning needs enormous amounts of labeled data.

For a given problem, we often have access to a large dataset of unlabeled data. How such unsupervised data can be used for neural networks has been of research interest for many years. Xie et al. were among the first, in 2016, to investigate unsupervised deep learning image clustering strategies to leverage this data. Since then, the usage of unlabeled data has been researched in numerous ways and has created research fields like unsupervised, semi-supervised, self-supervised, weakly-supervised, or metric learning. Generally speaking, unsupervised learning uses no labeled data, semi-supervised learning uses both labeled and unlabeled data, while self-supervised learning generates labeled data on its own. Other research directions differ even more: weakly-supervised learning uses only partial information about the label, and metric learning aims at learning a good distance metric. The idea that unifies these approaches is that using unlabeled data is beneficial during the training process. It either makes training with fewer labels more robust or, in some rare cases, even surpasses supervised learning.

Due to this benefit, many researchers and companies work in the field of semi-, self-, and unsupervised learning. The main goal is to close the gap between semi-supervised and supervised learning, or even to surpass supervised results. Considering the methods presented in this survey, we believe that research is at the breaking point of achieving this goal. Hence, there is a lot of ongoing research in this field. This survey provides an overview to keep track of the major and recent developments in semi-, self-, and unsupervised learning.

Most investigated research topics share a variety of common ideas while differing in goal, application contexts, and implementation details. This survey gives an overview of this wide range of research topics. The focus of this survey is on describing the similarities and differences between the methods.

While we look at a broad range of learning strategies, we compare these methods only on the image classification task. The intended audience of this survey consists of deep learning researchers and interested readers with comparable preliminary knowledge who want to keep track of recent developments in the field of semi-, self-, and unsupervised learning.
