Unsupervised learning methods: clustering, autoencoders, deep belief networks.
Approaches to generating natural images:
1. Non-parametric: these models often do matching against a database of existing images, typically matching patches of images; common applications are texture synthesis, super-resolution, and in-painting.
2. Parametric: a variational sampling approach to generating images (Kingma & Welling, 2013) has had some success, but the samples often suffer from being blurry.
Another approach generates images using an iterative forward diffusion process (Sohl-Dickstein et al., 2015).
Generative Adversarial Networks (Goodfellow et al., 2014) generated images that suffered from being noisy and incomprehensible.
A Laplacian pyramid extension to this approach (Denton et al., 2015) showed higher-quality images.
A recurrent network approach (Gregor et al., 2015) and a deconvolution network approach (Dosovitskiy et al., 2014) have also recently had some success with generating natural images.
DCGAN architecture guidelines:
1. No pooling layers: following the all-convolutional net, deterministic spatial pooling functions (such as max pooling) are replaced with strided convolutions (strided convolutions in the discriminator, fractional-strided convolutions in the generator).
2. No fully connected layers: following the trend towards eliminating fully connected layers on top of convolutional features, the 100-dimensional z is projected by a matrix multiplication and the output is reshaped into a 4×4×1024 convolutional feature map.
3. Batch normalization: Batch Normalization stabilizes learning by normalizing the input to each unit to have zero mean and unit variance; it is applied in both the generator and the discriminator.
4. Activations: the generator uses ReLU in all layers except the output layer, which uses Tanh; the discriminator uses LeakyReLU in all layers (see the sketch after this list).
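Putting these four guidelines together, a minimal PyTorch sketch (not the paper's released code) might look as follows. The layer widths are chosen to start from the 4×4×1024 projection described above and to produce 64×64 RGB images; the LeakyReLU slope of 0.2 follows the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: project z, reshape to 4x4x1024, then
    fractional-strided (transposed) convolutions up to 64x64x3."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # Projecting a 1x1 spatial input is equivalent to a matrix multiply + reshape.
            nn.ConvTranspose2d(z_dim, 1024, kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024),
            nn.ReLU(inplace=True),                               # ReLU in all hidden layers
            nn.ConvTranspose2d(1024, 512, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),   # 8x8  -> 16x16
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),   # 16x16 -> 32x32
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 3, 4, 2, 1, bias=False),     # 32x32 -> 64x64
            nn.Tanh(),                                           # Tanh on the output layer
        )

    def forward(self, z):
        # z: (batch, z_dim) -> (batch, z_dim, 1, 1) so the first layer acts as the projection
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """DCGAN-style discriminator: strided convolutions instead of pooling,
    LeakyReLU in every layer, no fully connected hidden layers."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 128, 4, 2, 1, bias=False),      # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),    # 32x32 -> 16x16
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),    # 16x16 -> 8x8
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1024, 4, 2, 1, bias=False),   # 8x8  -> 4x4
            nn.BatchNorm2d(1024),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(1024, 1, 4, 1, 0, bias=False),     # 4x4  -> 1x1 realness score
        )

    def forward(self, x):
        return self.net(x).view(-1)
```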
Evaluating unsupervised representation learning: one common technique for evaluating the quality of unsupervised representation learning algorithms is to apply them as a feature extractor on supervised datasets and evaluate the performance of linear models fitted on top of these features.
In other words, part of the trained network is reused as a feature extractor for a supervised task.
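A rough sketch of that protocol, assuming the Discriminator class from the sketch above: the paper max-pools features from all of the discriminator's convolutional layers to a 4×4 spatial grid, flattens and concatenates them, and trains a regularized linear L2-SVM on top; here scikit-learn's LinearSVC stands in for that SVM and the pooling/batching details are simplified.

```python
import torch
import torch.nn.functional as F
from sklearn.svm import LinearSVC  # linear model fitted on top of the frozen features

@torch.no_grad()
def extract_features(discriminator, images):
    """Run images through the discriminator's conv stack; after each LeakyReLU,
    max-pool the activations to a 4x4 grid, then flatten and concatenate.
    Assumes `discriminator.net` is the Sequential from the sketch above."""
    feats, x = [], images
    for layer in discriminator.net:
        x = layer(x)
        if isinstance(layer, torch.nn.LeakyReLU):
            pooled = F.adaptive_max_pool2d(x, output_size=4)
            feats.append(pooled.flatten(start_dim=1))
    return torch.cat(feats, dim=1).cpu().numpy()

def evaluate_representation(discriminator, train_x, train_y, test_x, test_y):
    """Fit a linear classifier on frozen discriminator features, return test accuracy."""
    discriminator.eval()  # use running BatchNorm statistics at feature-extraction time
    clf = LinearSVC(C=1.0)
    clf.fit(extract_features(discriminator, train_x), train_y)
    return clf.score(extract_features(discriminator, test_x), test_y)
```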
[1] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (Radford, Metz & Chintala, 2015)