Autoencoder
Omni-Space
Focused on Android, mobile security, and AI
Autoencoder Explained
This article introduces the AutoEncoder, covering: the definition and derivation of the AutoEncoder; the origin of and an introduction to the Sparse AutoEncoder; and a comparison of the unsupervised learning methods used in Deep Learning. 1. Mathematical Background. 1.1 Orthogonal Matrix: a matrix satisfying A Aᵀ = I is orthogonal…
Reposted 2017-10-27 14:30:20 · 12592 views · 0 comments
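The truncated definition above is the orthogonality condition A Aᵀ = I. As a quick numerical sanity check (the rotation matrix below is my own example, not taken from the article):

```python
import numpy as np

# A 2-D rotation matrix is a standard example of an orthogonal matrix.
theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: A @ A.T equals the identity (up to floating-point error).
assert np.allclose(A @ A.T, np.eye(2))
```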
GAUSSIAN MIXTURE VAE: LESSONS IN VARIATIONAL INFERENCE, GENERATIVE MODELS, AND DEEP NETS
Not too long ago, I came across this paper on unsupervised clustering with Gaussian Mixture VAEs. I was quite surprised, especially since I had worked on a very similar (maybe the same?) concept a few…
Reposted 2018-03-12 13:53:33 · 1863 views · 1 comment
An intuitive understanding of variational autoencoders without any formula
I love the simplicity of autoencoders as a very intuitive unsupervised learning method. They are, in the simplest case, a three-layer neural network. In the first layer the data comes in, the second…
Reposted 2018-02-06 10:32:05 · 644 views · 0 comments
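The three-layer picture described above can be sketched as a single untrained forward pass; the layer sizes and the tanh nonlinearity below are illustrative choices of mine, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))      # toy data: 100 samples, 8 features

# Three layers: input (8) -> hidden bottleneck (3) -> output (8).
W_enc = rng.normal(size=(8, 3)) * 0.1
W_dec = rng.normal(size=(3, 8)) * 0.1

h = np.tanh(x @ W_enc)             # compressed representation
x_hat = h @ W_dec                  # reconstruction
loss = np.mean((x - x_hat) ** 2)   # reconstruction error to minimize
```

Training would adjust `W_enc` and `W_dec` to shrink `loss`; this sketch only shows the shape of the computation.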
Denoising Autoencoder for Collaborative Filtering on Market Basket Data
The IPython notebook with the complete code and dataset is available at the following link. In this tutorial, we will apply a denoising autoencoder to market basket data for collaborative filtering. Th…
Reposted 2017-11-08 14:34:35 · 719 views · 0 comments
A wizard’s guide to Adversarial Autoencoders: Part 3, Disentanglement of style and content
“If you’ve read the previous two parts you’ll feel right at home implementing this one.” ← Part 2: Exploring latent space with Adversarial Autoencoders. Parts 1 and 2 were mainly concerned with ge…
Reposted 2017-10-30 15:04:56 · 546 views · 0 comments
A wizard’s guide to Adversarial Autoencoders: Part 2, Exploring latent space with Adversarial Autoencoders
“This article is a continuation from A wizard’s guide to Autoencoders: Part 1; if you haven’t read it but are familiar with the basics of autoencoders, then continue on. You’ll need to know a little…
Reposted 2017-10-30 15:03:17 · 915 views · 0 comments
A wizard’s guide to Adversarial Autoencoders: Part 1, Autoencoder?
“If you know how to write code to classify MNIST digits using Tensorflow, then you are all set to read the rest of this post, or else I’d highly suggest you go through this article on Tensorflow’s…
Reposted 2017-10-30 15:01:53 · 1284 views · 0 comments
TensorFlow for Hackers (Part VII) - Credit Card Fraud Detection using Autoencoders in Keras
It’s Sunday morning, it’s quiet, and you wake up with a big smile on your face. Today is going to be a great day! Except, your phone rings, rather “internationally”. You pick it up slowly and hear some…
Reposted 2017-10-12 04:11:40 · 2013 views · 1 comment
When We Talk About Deep Learning: AutoEncoder and Related Models
Introduction: The AutoEncoder is a type of Feedforward Neural Network that was once mainly used for dimensionality reduction or feature extraction, and has since been extended to generative modeling. Unlike other feedforward NNs, which focus on the Output Layer and the error rate, the AutoEncoder focuses on the Hidden Layer; moreover, an ordinary Feedforward…
Reposted 2017-10-27 14:42:20 · 1986 views · 0 comments
Tensorflow Day17 Sparse Autoencoder
Today's goals: understand the Sparse Autoencoder; understand KL divergence & L2 loss; implement a Sparse Autoencoder. Github IPython Notebook (full, easier-to-read version). When training an ordinary autoencoder, if you feed in some inputs, you will see that most of the hidden units activate. The meaning of these activations…
Reposted 2017-10-27 14:39:39 · 1030 views · 0 comments
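The KL-divergence sparsity penalty mentioned above compares a target activation rate ρ with each hidden unit's mean activation ρ̂. A minimal numpy sketch (the target rate and the mean activations below are illustrative numbers, not from the tutorial):

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """KL divergence between Bernoulli(rho) and Bernoulli(rho_hat),
    summed over hidden units: the sparsity penalty added to the loss."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rho = 0.05                            # target: units active 5% of the time
rho_hat = np.array([0.05, 0.2, 0.5])  # mean activations of 3 hidden units
penalty = kl_sparsity(rho, rho_hat)   # grows as units stray from the target
```

Units whose mean activation matches ρ contribute nothing; the busier units dominate the penalty, which is what pushes most hidden units toward inactivity.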
Tensorflow Day18 Convolutional Autoencoder
Today's goals: understand the Convolutional Autoencoder; implement a Deconvolutional layer; implement a Max Unpooling layer; inspect the code layer and the decoder. Github IPython Notebook (full, easier-to-read version). Introduction: Let's take a closer look at the network structure of the Autoencoder implemented earlier; regardless of its en…
Reposted 2017-10-27 14:38:37 · 4507 views · 0 comments
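Max unpooling, one of the layers the tutorial implements, places each pooled value back at the position where the maximum was taken and fills the rest with zeros. The tutorial itself works in TensorFlow; this 1-D numpy version is only an illustration of the idea:

```python
import numpy as np

def max_unpool_1d(pooled, indices, size):
    """Inverse of 1-D max pooling: scatter each pooled value back to the
    position of the original maximum; all other positions stay zero."""
    out = np.zeros(size)
    out[indices] = pooled
    return out

x = np.array([1.0, 3.0, 2.0, 5.0])
# Width-2 max pooling keeps values [3, 5], taken at positions [1, 3].
pooled, idx = np.array([3.0, 5.0]), np.array([1, 3])
restored = max_unpool_1d(pooled, idx, x.size)  # → [0., 3., 0., 5.]
```

In a convolutional autoencoder the decoder needs the argmax indices saved by the encoder's pooling layers to perform this scatter.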
Tensorflow Day19 Denoising Autoencoder
Today's goals: understand the Denoising Autoencoder; train a Denoising Autoencoder; test how the Denoising Autoencoder performs on different inputs. Github IPython Notebook (full, easier-to-read version). Introduction: What is denoising? It means removing noise; in other words, the autoencoder here can take input corrupted by noise…
Reposted 2017-10-27 14:37:01 · 616 views · 0 comments
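The corruption step that defines a denoising autoencoder can be sketched in a few lines; the masking-noise rate and the toy data below are my own illustrative choices, not from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)
x_clean = rng.uniform(size=(4, 6))             # toy batch of clean inputs

# Corruption step: zero out a random fraction of the inputs
# (masking noise). The training target remains x_clean.
mask = rng.uniform(size=x_clean.shape) > 0.3   # keep roughly 70% of entries
x_noisy = x_clean * mask

# The network is then trained to minimize
# || decode(encode(x_noisy)) - x_clean ||^2,
# so it must learn to reconstruct the clean signal from corrupted input.
```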
Tensorflow Day16 Autoencoder Implementation
Today's goals: implement an Autoencoder; compare input and output. Github IPython Notebook (full, easier-to-read version). Implementation: define the weight and bias functions: def weight_variable(shape, name): return tf.Variable(tf.truncated_normal(shape…
Reposted 2017-10-27 14:35:29 · 534 views · 0 comments
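The weight_variable snippet above is cut off mid-call. Below is a numpy sketch of the same idea: truncated-normal initialization, which resamples values that fall beyond two standard deviations, as tf.truncated_normal does. The stddev of 0.1 and the companion bias_variable are typical choices in these TF1 tutorials, not confirmed by the truncated text:

```python
import numpy as np

def weight_variable(shape, stddev=0.1, seed=0):
    """Truncated-normal init: draw N(0, stddev) values and resample any
    that land outside [-2*stddev, 2*stddev]."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, stddev, size=shape)
    out_of_range = np.abs(w) > 2 * stddev
    while out_of_range.any():
        w[out_of_range] = rng.normal(0.0, stddev, size=out_of_range.sum())
        out_of_range = np.abs(w) > 2 * stddev
    return w

def bias_variable(shape, value=0.1):
    """Constant bias init, the usual companion to weight_variable."""
    return np.full(shape, value)

W = weight_variable((8, 3))
b = bias_variable((3,))
```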
Google Brain's Wasserstein Autoencoder: A New Generation of Generative Model Algorithms
By Bai Yue and Xu Di. Variational autoencoders (VAEs) and generative adversarial networks (GANs) are the two mainstream approaches to unsupervised learning over complex distributions. Recently, Ilya Tolstikhin and colleagues at Google Brain proposed a new idea: the Wasserstein autoencoder, which not only retains some advantages of the VAE but also incorporates characteristics of the GAN architecture, achieving better performance. The paper, “Wasserstein Auto-Encoders”, has been accepted to ICLR, to be held in Vancouver starting April 30…
Reposted 2018-04-19 12:12:03 · 4194 views · 0 comments