[Paper note] Learning from Simulated and Unsupervised Images through Adversarial Training

  • Paper: Shrivastava et al., "Learning from Simulated and Unsupervised Images through Adversarial Training", CVPR 2017 (arXiv:1612.07828)
  • This is (reportedly) the first paper Apple published in the ML/CV field

Contribution

  • Proposes Simulated + Unsupervised (S+U) learning: use unlabeled real-world images to refine synthetic images with a GAN-style refiner, so the refined images look real while keeping the synthetic annotations
  • Trains the refiner with an adversarial loss combined with a self-regularization loss
  • Introduces key modifications (a local adversarial loss and a history buffer of refined images) to stabilize GAN training and prevent artifacts

Framework

  • Refiner (generator): $\tilde{x} := R_\theta(x)$
  • Refiner loss (general form): $L_R(\theta) = \sum_i \ell_{\text{real}}(\theta;\, \tilde{x}_i, \mathcal{Y}) + \lambda\, \ell_{\text{reg}}(\theta;\, \tilde{x}_i, x_i)$, where $\mathcal{Y}$ is the set of unlabeled real images
    • $\ell_{\text{reg}}$ penalizes the difference between the refined image and its synthetic input, so the synthetic annotations remain valid after refinement
  • Discriminator loss: $L_D(\phi) = -\sum_i \log D_\phi(\tilde{x}_i) - \sum_j \log\left(1 - D_\phi(y_j)\right)$
    • $\tilde{x}_i$ and $y_j$ are randomly sampled from the refined-image and real-image sets respectively; $D_\phi(\cdot)$ is the predicted probability that the input is a refined (non-real) image
  • Algorithm: alternately optimize the two losses; with $D_\phi$ fixed, take $K_g$ SGD steps on $L_R$ to update $\theta$, then with $R_\theta$ fixed, take $K_d$ steps on $L_D$ to update $\phi$ (sketch after this list)
  • $L_R$ as instantiated in this paper ($\ell_{\text{reg}}$ is an L1 pixel loss): $L_R(\theta) = -\sum_i \log\left(1 - D_\phi(R_\theta(x_i))\right) + \lambda \lVert R_\theta(x_i) - x_i \rVert_1$
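
A minimal sketch of these two losses and the alternating updates, assuming PyTorch. The module names (`refiner`, `discriminator`), the λ default, the step counts $K_g$/$K_d$, and the `1e-8` log-stabilizer are assumptions for illustration, not values from the paper; the discriminator is assumed to output a per-image probability in (0, 1):

```python
import torch

# Hypothetical modules: `refiner` maps synthetic images to refined ones,
# `discriminator` maps a batch of images to P(input is refined) in (0, 1).

def refiner_loss(refiner, discriminator, x_syn, lam=0.1):
    # L_R = -sum_i log(1 - D(R(x_i))) + lam * ||R(x_i) - x_i||_1
    x_ref = refiner(x_syn)
    d = discriminator(x_ref)
    realism = -torch.log(1.0 - d + 1e-8).sum()    # fool D: drive D(x_ref) -> 0
    self_reg = lam * (x_ref - x_syn).abs().sum()  # stay close to the synthetic input
    return realism + self_reg

def discriminator_loss(discriminator, x_ref, y_real):
    # L_D = -sum_i log D(x~_i) - sum_j log(1 - D(y_j))
    d_ref = discriminator(x_ref.detach())  # refined images: target D -> 1
    d_real = discriminator(y_real)         # real images:    target D -> 0
    return -torch.log(d_ref + 1e-8).sum() - torch.log(1.0 - d_real + 1e-8).sum()

def train_step(refiner, discriminator, opt_r, opt_d, x_syn, y_real, k_g=2, k_d=1):
    # Alternate k_g refiner updates with k_d discriminator updates.
    for _ in range(k_g):
        opt_r.zero_grad()
        refiner_loss(refiner, discriminator, x_syn).backward()
        opt_r.step()
    for _ in range(k_d):
        opt_d.zero_grad()
        discriminator_loss(discriminator, refiner(x_syn), y_real).backward()
        opt_d.step()
```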

Stabilizing GAN training

  • Local adversarial loss: divide the refined and real images into a $w \times h$ grid of local patches and classify each patch separately; the discriminator is a fully convolutional network whose output is a $w \times h$ probability map, one value per patch (first sketch after this list)
    • The final loss is the sum of the cross-entropy losses over all local patches
  • Using a history of refined images
    • Two issues arise when training only on the latest refined images:
      • Adversarial training can diverge
      • The refiner re-introduces artifacts that the discriminator has already forgotten about
    • Keep a buffer of refined images from previous iterations; each discriminator mini-batch of size $b$ contains $b/2$ images sampled from the buffer and $b/2$ newly refined images
    • After each iteration, replace $b/2$ randomly chosen images in the buffer with newly refined ones (second sketch after this list)
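
A minimal sketch of the local adversarial loss, assuming the discriminator is fully convolutional and emits an `(N, 1, w, h)` probability map; the function name and tensor shapes are hypothetical:

```python
import torch
import torch.nn.functional as F

def local_adversarial_loss(prob_map, target_is_refined):
    # prob_map: (N, 1, w, h) output of a fully convolutional discriminator,
    # one P(patch is refined) per local region. Sum cross-entropy over patches.
    target = (torch.ones_like(prob_map) if target_is_refined
              else torch.zeros_like(prob_map))
    return F.binary_cross_entropy(prob_map, target, reduction='sum')
```

Summing over the $w \times h$ map is what makes the loss local: every patch contributes an independent real-vs-refined decision, so the refiner cannot get away with making only one region of the image look realistic.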
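
And a minimal sketch of the refined-image history buffer, following the $b/2 + b/2$ scheme above; the capacity default and the class interface are assumptions (an even batch size $b$ is also assumed):

```python
import random
import torch

class ImageHistoryBuffer:
    def __init__(self, capacity=512):
        self.capacity = capacity  # assumed default, not from the paper
        self.images = []          # individual refined images (tensors)

    def sample_and_update(self, new_refined):
        # Build a discriminator batch: b/2 newly refined + b/2 history images,
        # then overwrite b/2 random buffer slots with the new refined images.
        half = new_refined.size(0) // 2
        if len(self.images) >= half:
            history = torch.stack(random.sample(self.images, half))
            batch = torch.cat([new_refined[:half], history], dim=0)
        else:
            batch = new_refined  # buffer not yet warm: use new images only
        for img in new_refined[:half].detach():
            if len(self.images) < self.capacity:
                self.images.append(img)
            else:
                self.images[random.randrange(self.capacity)] = img
        return batch
```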

Experiments

  • Gaze estimation
    • Dataset: MPIIGaze dataset
    • Synthesizer: UnityEyes
    • Visual Turing test: human subjects could not reliably tell refined images from real ones (accuracy close to chance)
    • Quantitative result: training the gaze estimator on refined images improves accuracy by 22.3% over training on the original synthetic images
  • Hand pose estimation
    • Dataset: NYU hand pose dataset
    • Pose-estimation CNN: Stacked Hourglass network