AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer (Paper Walkthrough)



Preface

This post is a walkthrough of the ICCV 2021 paper AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer.


1. Contributions of the Paper

Look directly at Figure 3, the main diagram of the attention mechanism the authors propose. It largely follows SANet: an attention map is first computed from the style-image and content-image features, the map is then multiplied by the style features, and the content features are added back at the end. The difference is that the authors additionally compute an attention-weighted standard deviation (and mean) in this process.
[Figure 3 of the paper: the AdaAttN module]
This works because the variance equals the expectation of the square minus the square of the expectation, derived as follows:

Var(X) = E[(X - μ)²] = E[X² - 2Xμ + μ²] = E[X²] - 2μ² + μ² = E[X²] - μ²

So this is analogous to the familiar operation of scaling a feature by a standard deviation and then adding a mean.
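To make the mechanism above concrete, here is a minimal PyTorch-style sketch of an AdaAttN-like module, written under my own assumptions: names such as AdaAttNSketch and mean_std_norm are illustrative, and details like the 1x1 projections or how multi-layer features are combined may differ from the official implementation.

```python
# Minimal sketch of an AdaAttN-style module (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def mean_std_norm(x, eps=1e-5):
    """Channel-wise instance normalization: zero mean, unit std per channel."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = torch.sqrt(x.var(dim=(2, 3), keepdim=True) + eps)
    return (x - mean) / std


class AdaAttNSketch(nn.Module):
    def __init__(self, channels, key_dim):
        super().__init__()
        # 1x1 convs: query/key come from normalized features, value from the raw style feature.
        self.q = nn.Conv2d(channels, key_dim, 1)
        self.k = nn.Conv2d(channels, key_dim, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, content, style):
        b, c, h, w = content.shape
        # Attention between normalized content (query) and normalized style (key), as in SANet.
        q = self.q(mean_std_norm(content)).flatten(2).transpose(1, 2)  # B x (HW) x dk
        k = self.k(mean_std_norm(style)).flatten(2)                    # B x dk x (HsWs)
        v = self.v(style).flatten(2).transpose(1, 2)                   # B x (HsWs) x C

        attn = F.softmax(torch.bmm(q, k), dim=-1)                      # B x (HW) x (HsWs)

        # Attention-weighted per-position mean and standard deviation of the style,
        # using Var(X) = E[X^2] - (E[X])^2 from the derivation above.
        mean = torch.bmm(attn, v)                                      # B x (HW) x C
        sq_mean = torch.bmm(attn, v * v)
        std = torch.sqrt(torch.clamp(sq_mean - mean * mean, min=0.0) + 1e-6)

        mean = mean.transpose(1, 2).reshape(b, c, h, w)
        std = std.transpose(1, 2).reshape(b, c, h, w)
        # Re-scale the normalized content feature point by point with the style statistics.
        return std * mean_std_norm(content) + mean
```

The key difference from SANet in this sketch is the extra attention-weighted standard-deviation term, which re-introduces second-order statistics of the style feature at every spatial position instead of only adding the content feature back.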

Experimental results:
[Qualitative comparison figure from the paper]

Summary

Judging from the experimental results, the method works quite well; in my own tests it performs especially well on face images. This is only a brief description, and I will add more if questions come up.
