Paper:
Title: Attention Is All You Need
-
What is the main claim? What is the key idea?
This paper proposes the Transformer, a new and simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
-
Is the idea neat? Is it counter-intuitive?
I think it's a neat idea. The Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. Because every position attends to every other position in a single matrix operation, rather than step by step as in an RNN, the Transformer allows for significantly more parallelization and is better suited to GPU hardware.
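For reference, the core operation is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of it; the toy shapes and the choice Q = K = V (self-attention on a single input) are my own illustration, not from the paper:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (len_q, len_k) similarity scores
    # Softmax over the key dimension, numerically stabilized.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # (len_q, d_v) weighted sum of values

# Toy self-attention: 4 positions, model width 8. All positions
# attend to each other in one matrix multiply, which is why this
# parallelizes better than a sequential RNN.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)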
-
Is it useful to my work?
Yes. Recently I have been studying the pre-training models Mockingjay and TERA. These two models, like most other pre-training models, are based on the Transformer, so understanding the Transformer's model structure and its attention mechanism helps me do this work better.
-
Is this a paper worth following?
Yes. The Transformer is a very popular model at the moment, and if your work is constrained by sequential computation, this paper is worth reading. I found the paper difficult to understand, so reading related blog posts alongside it helped. I still cannot fully understand everything in it, and I will continue to study it in follow-up work.