Introduction
The MixUp idea was introduced back in 2018 in this paper and was quickly adopted into training pipelines by many ML researchers. The implementation of MixUp is really simple, yet it can bring a substantial boost to your model's performance.
MixUp can be represented with this simple equation:
newImage = alpha * image1 + (1-alpha) * image2
This newImage is simply a blend of two images from your training set; it is that simple! So, what will be the target value for the newImage?
newTarget = alpha * target1 + (1-alpha) * target2
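For illustration, here is a minimal NumPy sketch of these two equations. The array shapes, the variable names, and the Beta(0.4, 0.4) draw for alpha are my own assumptions for the example (the paper samples the mixing coefficient from a Beta distribution), not the exact setup used in the post.

import numpy as np

# Two dummy training images (32x32 RGB) and their one-hot targets (3 classes, illustrative only)
image1, image2 = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
target1, target2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Mixing coefficient; the MixUp paper draws it from a Beta(a, a) distribution
alpha = np.random.beta(0.4, 0.4)

# Blend the images and the targets with the same coefficient
new_image = alpha * image1 + (1 - alpha) * image2
new_target = alpha * target1 + (1 - alpha) * target2
print(new_target)  # e.g. [0.7, 0.3, 0.0] when alpha = 0.7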
The important thing here is that you don't always need to one-hot encode your target vector. If you are not using one-hot encoding, a custom loss function will be required. I will explain it below.
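As a preview, one common way to handle integer (non-one-hot) labels is to interpolate the loss instead of the target, which is mathematically equivalent to cross-entropy against the mixed one-hot target. A hedged PyTorch-style sketch follows; the function and variable names are mine, not from the original post.

import torch
import torch.nn.functional as F

def mixup_criterion(pred, target1, target2, alpha):
    # With integer class labels, mix the two losses instead of the targets.
    # This equals cross-entropy against alpha * onehot(target1) + (1 - alpha) * onehot(target2).
    return alpha * F.cross_entropy(pred, target1) + (1 - alpha) * F.cross_entropy(pred, target2)

# Example usage with dummy logits and integer labels for a 3-class problem
pred = torch.randn(4, 3)                 # batch of 4 predictions (logits)
target1 = torch.tensor([0, 1, 2, 0])     # labels of the first images in each pair
target2 = torch.tensor([1, 2, 0, 2])     # labels of the second images in each pair
loss = mixup_criterion(pred, target1, target2, alpha=0.7)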