Generative Adversarial Networks [Paper Collection]

https://handong1587.github.io/deep_learning/2015/10/09/gan.html


Jump to...

     1. Image-to-Image Translation
          1. Pix2Pix
     2. Projects
     3. Blogs
     4. Talks / Videos
     5. Resources


Generative Adversarial Networks

Generative Adversarial Nets

Adversarial Feature Learning

Generative Adversarial Networks

Adversarial Examples and Adversarial Training

How to Train a GAN? Tips and tricks to make GANs work

Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Learning Interpretable Latent Representations with InfoGAN: A tutorial on implementing InfoGAN in Tensorflow

Coupled Generative Adversarial Networks

Energy-based Generative Adversarial Network

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

Connecting Generative Adversarial Networks and Actor-Critic Methods

Generative Adversarial Nets from a Density Ratio Estimation Perspective

Unrolled Generative Adversarial Networks

Generative Adversarial Networks as Variational Training of Energy Based Models

Multi-class Generative Adversarial Networks with the L2 Loss Function

Least Squares Generative Adversarial Networks

Inverting The Generator Of A Generative Adversarial Network

ml4a-invisible-cities

Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks

Associative Adversarial Networks

Temporal Generative Adversarial Nets

Handwriting Profiling using Generative Adversarial Networks

  • intro: Accepted at The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17 Student Abstract and Poster Program)
  • arxiv: https://arxiv.org/abs/1611.08789

C-RNN-GAN: Continuous recurrent neural networks with adversarial training

Ensembles of Generative Adversarial Networks

Improved generator objectives for GANs

Stacked Generative Adversarial Networks

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

AdaGAN: Boosting Generative Models

Towards Principled Methods for Training Generative Adversarial Networks

Wasserstein GAN

Improved Training of Wasserstein GANs

On the effect of Batch Normalization and Weight Normalization in Generative Adversarial Networks

On the Effects of Batch and Weight Normalization in Generative Adversarial Networks

Controllable Generative Adversarial Network

Generative Adversarial Networks: An Overview

  • intro: Imperial College London & Victoria University of Wellington & University of Montreal & Cortexica Vision Systems Ltd
  • intro: IEEE Signal Processing Magazine Special Issue on Deep Learning for Visual Understanding
  • arxiv: https://arxiv.org/abs/1710.07035

CyCADA: Cycle-Consistent Adversarial Domain Adaptation

https://arxiv.org/abs/1711.03213

Spectral Normalization for Generative Adversarial Networks

https://openreview.net/forum?id=B1QRgziT-

Are GANs Created Equal? A Large-Scale Study

GAGAN: Geometry-Aware Generative Adversarial Networks

https://arxiv.org/abs/1712.00684

CycleGAN: a Master of Steganography

PacGAN: The power of two samples in generative adversarial networks

ComboGAN: Unrestrained Scalability for Image Domain Translation

Decoupled Learning for Conditional Adversarial Networks

https://arxiv.org/abs/1801.06790

No Modes left behind: Capturing the data distribution effectively using GANs

Improving GAN Training via Binarized Representation Entropy (BRE) Regularization

On GANs and GMMs

https://arxiv.org/abs/1805.12462

The Unusual Effectiveness of Averaging in GAN Training

https://arxiv.org/abs/1806.04498

Understanding the Effectiveness of Lipschitz Constraint in Training of GANs via Gradient Analysis

https://arxiv.org/abs/1807.00751

The GAN Landscape: Losses, Architectures, Regularization, and Normalization

Which Training Methods for GANs do actually Converge?

Convergence Problems with Generative Adversarial Networks (GANs)

Bayesian CycleGAN via Marginalizing Latent Sampling

https://arxiv.org/abs/1811.07465

GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

https://arxiv.org/abs/1811.10597

Do GAN Loss Functions Really Matter?

https://arxiv.org/abs/1811.09567

Image-to-Image Translation

Pix2Pix

Image-to-Image Translation with Conditional Adversarial Networks

Remastering Classic Films in Tensorflow with Pix2Pix

Image-to-Image Translation in Tensorflow

webcam pix2pix

https://github.com/memo/webcam-pix2pix-tensorflow


Unsupervised Image-to-Image Translation with Generative Adversarial Networks

Unsupervised Image-to-Image Translation Networks

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

CycleGAN and pix2pix in PyTorch

Perceptual Adversarial Networks for Image-to-Image Transformation

https://arxiv.org/abs/1706.09138

XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings

In2I : Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks

https://arxiv.org/abs/1711.09334

StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation

https://arxiv.org/abs/1711.09554

Toward Multimodal Image-to-Image Translation

Face Translation between Images and Videos using Identity-aware CycleGAN

https://arxiv.org/abs/1712.00971

Unsupervised Multi-Domain Image Translation with Domain-Specific Encoders/Decoders

https://arxiv.org/abs/1712.02050

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

On the Effectiveness of Least Squares Generative Adversarial Networks

https://arxiv.org/abs/1712.06391

GANs for Limited Labeled Data

Defending Against Adversarial Examples

Conditional Image-to-Image Translation

XOGAN: One-to-Many Unsupervised Image-to-Image Translation

https://arxiv.org/abs/1805.07277

Unsupervised Attention-guided Image to Image Translation

https://arxiv.org/abs/1806.02311

Exemplar Guided Unsupervised Image-to-Image Translation

https://arxiv.org/abs/1805.11145

Improving Shape Deformation in Unsupervised Image-to-Image Translation

https://arxiv.org/abs/1808.04325

Video-to-Video Synthesis

Segmentation Guided Image-to-Image Translation with Adversarial Networks

https://arxiv.org/abs/1901.01569

Projects

Generative Adversarial Networks with Keras

Generative Adversarial Network Demo for Fresh Machine Learning #2

TextGAN: A generative adversarial network for text generation, written in TensorFlow.

cleverhans v0.1: an adversarial machine learning library

Deep Convolutional Variational Autoencoder w/ Adversarial Network

A versatile GAN (generative adversarial network) implementation, focused on scalability and ease of use.

AdaGAN: Boosting Generative Models

TensorFlow-GAN (TFGAN)

Blogs

Generative Adversarial Networks Explained

Generative Adversarial Autoencoders in Theano

An introduction to Generative Adversarial Networks (with code in TensorFlow)

Difficulties training a Generative Adversarial Network

Are Energy-Based GANs any more energy-based than normal GANs?

http://www.inference.vc/are-energy-based-gans-actually-energy-based/

Generative Adversarial Networks Explained with a Classic Spongebob Squarepants Episode: Plus a Tensorflow tutorial for implementing your own GAN

Deep Learning Research Review Week 1: Generative Adversarial Nets

Stability of Generative Adversarial Networks

Instance Noise: A trick for stabilising GAN training

Generating Fine Art in 300 Lines of Code

Talks / Videos

Generative Adversarial Network visualization

Resources

The GAN Zoo

AdversarialNetsPapers: The classical papers about adversarial nets

GAN Timeline

"Feature Statistics Mixing Regularization for Generative Adversarial Networks" proposes a new regularization method for GANs that aims to improve training stability and the quality of generated results. The model consists of the following parts:

     1. Generator: produces images from input random noise.
     2. Discriminator: classifies the generator's output against real images to judge whether an image is real or fake.
     3. Feature Statistics Mixing Regularization: a regularization introduced between the generator and discriminator to improve the generator's output quality and the discriminator's robustness. It centers on mixing feature statistics: by blending the feature statistics arising from the generator and the discriminator with one another, it reduces the gap between them, strengthening the network's robustness and stability.
     4. Loss function: computed from the outputs of the generator and discriminator to measure the generator's quality and the discriminator's robustness. The generator's loss includes the difference between generated and real images (measured by pixel-level L1 or L2 distance) and the degree to which generated images are judged real by the discriminator; the discriminator's loss includes how correctly it classifies images and how it classifies the generator's output.

In summary, the model in "Feature Statistics Mixing Regularization for Generative Adversarial Networks" comprises a generator, a discriminator, feature statistics mixing regularization, and the loss functions, together improving GAN training stability and sample quality.
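The statistics-mixing step described above can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function name `mix_feature_statistics` and the choice of channel-wise mean/std as the "feature statistics" are assumptions based on the summary.

```python
import numpy as np

def mix_feature_statistics(feat_a, feat_b, alpha, eps=1e-5):
    """Blend the channel-wise mean/std of feat_a toward those of feat_b.

    feat_a, feat_b: feature maps of shape (C, H, W).
    alpha: mixing weight in [0, 1]; alpha = 1 keeps feat_a's own statistics.
    """
    # Per-channel statistics over the spatial dimensions.
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sd_a = feat_a.std(axis=(1, 2), keepdims=True) + eps
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sd_b = feat_b.std(axis=(1, 2), keepdims=True) + eps
    # Interpolate the statistics, then re-normalize feat_a with the mix.
    mu_mix = alpha * mu_a + (1 - alpha) * mu_b
    sd_mix = alpha * sd_a + (1 - alpha) * sd_b
    return sd_mix * (feat_a - mu_a) / sd_a + mu_mix
```

With `alpha = 1` the features come back unchanged; smaller values pull their per-channel statistics toward those of the other image, which is the kind of controlled perturbation a statistics-mixing regularizer would penalize the discriminator for over-reacting to.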
